00:00:00.001 Started by upstream project "autotest-per-patch" build number 126254 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.055 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.056 The recommended git tool is: git 00:00:00.056 using credential 00000000-0000-0000-0000-000000000002 00:00:00.058 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.081 Fetching changes from the remote Git repository 00:00:00.087 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.118 Using shallow fetch with depth 1 00:00:00.118 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.118 > git --version # timeout=10 00:00:00.155 > git --version # 'git version 2.39.2' 00:00:00.155 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.194 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.194 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.988 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.004 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.019 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:05.019 > git config core.sparsecheckout # timeout=10 00:00:05.030 > git read-tree -mu HEAD # timeout=10 00:00:05.048 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:05.072 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:05.072 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:05.179 [Pipeline] Start of Pipeline 00:00:05.196 [Pipeline] library 00:00:05.198 Loading library shm_lib@master 00:00:07.231 Library shm_lib@master is cached. Copying from home. 00:00:07.269 [Pipeline] node 00:00:07.361 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.363 [Pipeline] { 00:00:07.380 [Pipeline] catchError 00:00:07.382 [Pipeline] { 00:00:07.428 [Pipeline] wrap 00:00:07.443 [Pipeline] { 00:00:07.458 [Pipeline] stage 00:00:07.461 [Pipeline] { (Prologue) 00:00:07.636 [Pipeline] sh 00:00:07.917 + logger -p user.info -t JENKINS-CI 00:00:07.936 [Pipeline] echo 00:00:07.937 Node: CYP12 00:00:07.944 [Pipeline] sh 00:00:08.256 [Pipeline] setCustomBuildProperty 00:00:08.295 [Pipeline] echo 00:00:08.297 Cleanup processes 00:00:08.302 [Pipeline] sh 00:00:08.585 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.585 104687 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.599 [Pipeline] sh 00:00:08.885 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.885 ++ grep -v 'sudo pgrep' 00:00:08.885 ++ awk '{print $1}' 00:00:08.885 + sudo kill -9 00:00:08.885 + true 00:00:08.905 [Pipeline] cleanWs 00:00:08.914 [WS-CLEANUP] Deleting project workspace... 00:00:08.914 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.920 [WS-CLEANUP] done 00:00:08.924 [Pipeline] setCustomBuildProperty 00:00:08.935 [Pipeline] sh 00:00:09.213 + sudo git config --global --replace-all safe.directory '*' 00:00:09.281 [Pipeline] httpRequest 00:00:09.298 [Pipeline] echo 00:00:09.299 Sorcerer 10.211.164.101 is alive 00:00:09.307 [Pipeline] httpRequest 00:00:09.311 HttpMethod: GET 00:00:09.311 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:09.312 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:09.314 Response Code: HTTP/1.1 200 OK 00:00:09.315 Success: Status code 200 is in the accepted range: 200,404 00:00:09.315 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:10.408 [Pipeline] sh 00:00:10.693 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:10.710 [Pipeline] httpRequest 00:00:10.736 [Pipeline] echo 00:00:10.738 Sorcerer 10.211.164.101 is alive 00:00:10.747 [Pipeline] httpRequest 00:00:10.752 HttpMethod: GET 00:00:10.752 URL: http://10.211.164.101/packages/spdk_a83ad116ad9e96cd017a455fe18f2048177986b5.tar.gz 00:00:10.753 Sending request to url: http://10.211.164.101/packages/spdk_a83ad116ad9e96cd017a455fe18f2048177986b5.tar.gz 00:00:10.769 Response Code: HTTP/1.1 200 OK 00:00:10.770 Success: Status code 200 is in the accepted range: 200,404 00:00:10.770 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a83ad116ad9e96cd017a455fe18f2048177986b5.tar.gz 00:01:01.003 [Pipeline] sh 00:01:01.294 + tar --no-same-owner -xf spdk_a83ad116ad9e96cd017a455fe18f2048177986b5.tar.gz 00:01:03.856 [Pipeline] sh 00:01:04.142 + git -C spdk log --oneline -n5 00:01:04.142 a83ad116a scripts/setup.sh: Use HUGE_EVEN_ALLOC logic by default 00:01:04.142 a95bbf233 blob: set parent_id properly on spdk_bs_blob_set_external_parent. 
00:01:04.142 248c547d0 nvmf/tcp: add option for selecting a sock impl 00:01:04.142 2d30d9f83 accel: introduce tasks in sequence limit 00:01:04.142 2728651ee accel: adjust task per ch define name 00:01:04.155 [Pipeline] } 00:01:04.172 [Pipeline] // stage 00:01:04.180 [Pipeline] stage 00:01:04.182 [Pipeline] { (Prepare) 00:01:04.197 [Pipeline] writeFile 00:01:04.212 [Pipeline] sh 00:01:04.546 + logger -p user.info -t JENKINS-CI 00:01:04.560 [Pipeline] sh 00:01:04.849 + logger -p user.info -t JENKINS-CI 00:01:04.863 [Pipeline] sh 00:01:05.149 + cat autorun-spdk.conf 00:01:05.149 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:05.149 SPDK_TEST_NVMF=1 00:01:05.149 SPDK_TEST_NVME_CLI=1 00:01:05.149 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:05.149 SPDK_TEST_NVMF_NICS=e810 00:01:05.149 SPDK_TEST_VFIOUSER=1 00:01:05.149 SPDK_RUN_UBSAN=1 00:01:05.149 NET_TYPE=phy 00:01:05.158 RUN_NIGHTLY=0 00:01:05.164 [Pipeline] readFile 00:01:05.190 [Pipeline] withEnv 00:01:05.192 [Pipeline] { 00:01:05.205 [Pipeline] sh 00:01:05.493 + set -ex 00:01:05.493 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:05.493 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:05.493 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:05.493 ++ SPDK_TEST_NVMF=1 00:01:05.493 ++ SPDK_TEST_NVME_CLI=1 00:01:05.493 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:05.493 ++ SPDK_TEST_NVMF_NICS=e810 00:01:05.493 ++ SPDK_TEST_VFIOUSER=1 00:01:05.493 ++ SPDK_RUN_UBSAN=1 00:01:05.493 ++ NET_TYPE=phy 00:01:05.493 ++ RUN_NIGHTLY=0 00:01:05.493 + case $SPDK_TEST_NVMF_NICS in 00:01:05.493 + DRIVERS=ice 00:01:05.493 + [[ tcp == \r\d\m\a ]] 00:01:05.493 + [[ -n ice ]] 00:01:05.493 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:05.493 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:05.493 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:05.493 rmmod: ERROR: Module irdma is not currently loaded 00:01:05.493 rmmod: ERROR: Module i40iw is not currently loaded 00:01:05.493 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:05.493 + true 00:01:05.493 + for D in $DRIVERS 00:01:05.493 + sudo modprobe ice 00:01:05.493 + exit 0 00:01:05.504 [Pipeline] } 00:01:05.523 [Pipeline] // withEnv 00:01:05.529 [Pipeline] } 00:01:05.549 [Pipeline] // stage 00:01:05.558 [Pipeline] catchError 00:01:05.560 [Pipeline] { 00:01:05.573 [Pipeline] timeout 00:01:05.573 Timeout set to expire in 50 min 00:01:05.574 [Pipeline] { 00:01:05.588 [Pipeline] stage 00:01:05.590 [Pipeline] { (Tests) 00:01:05.606 [Pipeline] sh 00:01:05.894 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.894 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.894 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.894 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:05.895 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:05.895 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:05.895 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:05.895 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:05.895 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:05.895 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:05.895 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:05.895 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.895 + source /etc/os-release 00:01:05.895 ++ NAME='Fedora Linux' 00:01:05.895 ++ VERSION='38 (Cloud Edition)' 00:01:05.895 ++ ID=fedora 00:01:05.895 ++ VERSION_ID=38 00:01:05.895 ++ VERSION_CODENAME= 00:01:05.895 ++ PLATFORM_ID=platform:f38 00:01:05.895 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:05.895 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:05.895 ++ LOGO=fedora-logo-icon 00:01:05.895 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:05.895 ++ HOME_URL=https://fedoraproject.org/ 00:01:05.895 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:05.895 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:05.895 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:05.895 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:05.895 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:05.895 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:05.895 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:05.895 ++ SUPPORT_END=2024-05-14 00:01:05.895 ++ VARIANT='Cloud Edition' 00:01:05.895 ++ VARIANT_ID=cloud 00:01:05.895 + uname -a 00:01:05.895 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:05.895 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:09.199 Hugepages 00:01:09.199 node hugesize free / total 00:01:09.199 node0 1048576kB 0 / 0 00:01:09.199 node0 2048kB 0 / 0 00:01:09.199 node1 1048576kB 0 / 0 00:01:09.199 node1 2048kB 0 / 0 00:01:09.199 00:01:09.199 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:09.199 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:09.199 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:09.199 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:09.199 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:09.199 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:09.199 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:09.199 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:09.199 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:09.199 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:09.199 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:09.199 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:09.199 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:09.199 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:09.199 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:09.199 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:09.199 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:09.199 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:09.199 + rm -f /tmp/spdk-ld-path 00:01:09.199 + source autorun-spdk.conf 00:01:09.199 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.199 ++ SPDK_TEST_NVMF=1 00:01:09.199 ++ SPDK_TEST_NVME_CLI=1 00:01:09.199 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:09.199 ++ SPDK_TEST_NVMF_NICS=e810 00:01:09.199 ++ SPDK_TEST_VFIOUSER=1 00:01:09.199 ++ SPDK_RUN_UBSAN=1 00:01:09.199 ++ NET_TYPE=phy 00:01:09.199 ++ RUN_NIGHTLY=0 00:01:09.199 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:09.199 + [[ -n '' ]] 00:01:09.199 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:09.462 + for M in /var/spdk/build-*-manifest.txt 00:01:09.462 + [[ -f 
/var/spdk/build-pkg-manifest.txt ]] 00:01:09.462 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:09.462 + for M in /var/spdk/build-*-manifest.txt 00:01:09.462 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:09.462 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:09.462 ++ uname 00:01:09.462 + [[ Linux == \L\i\n\u\x ]] 00:01:09.462 + sudo dmesg -T 00:01:09.462 + sudo dmesg --clear 00:01:09.462 + dmesg_pid=105781 00:01:09.462 + [[ Fedora Linux == FreeBSD ]] 00:01:09.462 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:09.462 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:09.462 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:09.462 + [[ -x /usr/src/fio-static/fio ]] 00:01:09.462 + export FIO_BIN=/usr/src/fio-static/fio 00:01:09.462 + FIO_BIN=/usr/src/fio-static/fio 00:01:09.462 + sudo dmesg -Tw 00:01:09.462 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:09.462 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:09.462 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:09.462 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:09.462 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:09.462 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:09.462 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:09.462 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:09.462 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:09.462 Test configuration: 00:01:09.462 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.462 SPDK_TEST_NVMF=1 00:01:09.462 SPDK_TEST_NVME_CLI=1 00:01:09.462 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:09.462 SPDK_TEST_NVMF_NICS=e810 00:01:09.462 SPDK_TEST_VFIOUSER=1 00:01:09.462 SPDK_RUN_UBSAN=1 00:01:09.462 NET_TYPE=phy 00:01:09.462 RUN_NIGHTLY=0 23:37:24 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:09.462 23:37:24 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:09.462 23:37:24 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:09.462 23:37:24 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:09.462 23:37:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:09.462 23:37:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:09.462 23:37:24 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:09.462 23:37:24 -- paths/export.sh@5 -- $ export PATH 00:01:09.462 23:37:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:09.462 23:37:24 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:09.462 23:37:24 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:09.462 23:37:24 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721079444.XXXXXX 00:01:09.462 23:37:24 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721079444.V3wr8c 00:01:09.462 23:37:24 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:09.462 23:37:24 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:09.462 23:37:24 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:09.462 23:37:24 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:09.462 23:37:24 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:09.462 23:37:24 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:09.462 23:37:24 -- common/autotest_common.sh@390 -- $ xtrace_disable 00:01:09.462 23:37:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:09.462 23:37:24 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:09.462 23:37:24 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:09.462 23:37:24 -- pm/common@17 -- $ local monitor 00:01:09.462 23:37:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:09.462 23:37:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:09.462 23:37:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:09.462 23:37:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:09.462 23:37:24 -- pm/common@21 -- $ date +%s 00:01:09.462 23:37:24 -- pm/common@25 -- $ sleep 1 00:01:09.462 23:37:24 -- pm/common@21 -- $ date +%s 00:01:09.462 23:37:24 -- pm/common@21 -- $ date +%s 00:01:09.462 23:37:24 -- pm/common@21 -- $ date +%s 00:01:09.462 23:37:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721079444 00:01:09.462 23:37:24 -- pm/common@21 -- 
$ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721079444 00:01:09.462 23:37:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721079444 00:01:09.462 23:37:24 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721079444 00:01:09.723 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721079444_collect-vmstat.pm.log 00:01:09.723 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721079444_collect-cpu-load.pm.log 00:01:09.723 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721079444_collect-cpu-temp.pm.log 00:01:09.723 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721079444_collect-bmc-pm.bmc.pm.log 00:01:10.667 23:37:25 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:10.667 23:37:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:10.667 23:37:25 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:10.667 23:37:25 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:10.667 23:37:25 -- spdk/autobuild.sh@16 -- $ date -u 00:01:10.667 Mon Jul 15 09:37:25 PM UTC 2024 00:01:10.667 23:37:25 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:10.667 v24.09-pre-210-ga83ad116a 00:01:10.667 23:37:25 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:10.667 23:37:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:10.667 23:37:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:10.667 23:37:25 -- common/autotest_common.sh@1093 -- $ '[' 3 -le 1 ']' 00:01:10.667 23:37:25 -- common/autotest_common.sh@1099 -- $ xtrace_disable 00:01:10.667 23:37:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:10.667 ************************************ 00:01:10.667 START TEST ubsan 00:01:10.667 ************************************ 00:01:10.667 23:37:25 ubsan -- common/autotest_common.sh@1117 -- $ echo 'using ubsan' 00:01:10.667 using ubsan 00:01:10.667 00:01:10.667 real 0m0.001s 00:01:10.667 user 0m0.001s 00:01:10.667 sys 0m0.000s 00:01:10.667 23:37:25 ubsan -- common/autotest_common.sh@1118 -- $ xtrace_disable 00:01:10.667 23:37:25 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:10.667 ************************************ 00:01:10.667 END TEST ubsan 00:01:10.667 ************************************ 00:01:10.667 23:37:25 -- common/autotest_common.sh@1136 -- $ return 0 00:01:10.667 23:37:25 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:10.667 23:37:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:10.667 23:37:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:10.667 23:37:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:10.667 23:37:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:10.667 23:37:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:10.667 23:37:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:10.667 23:37:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:10.667 23:37:25 -- spdk/autobuild.sh@67 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:10.929 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:10.929 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:11.189 Using 'verbs' RDMA provider 00:01:27.036 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:39.270 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:39.270 Creating mk/config.mk...done. 00:01:39.270 Creating mk/cc.flags.mk...done. 00:01:39.270 Type 'make' to build. 00:01:39.270 23:37:53 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:39.270 23:37:53 -- common/autotest_common.sh@1093 -- $ '[' 3 -le 1 ']' 00:01:39.270 23:37:53 -- common/autotest_common.sh@1099 -- $ xtrace_disable 00:01:39.270 23:37:53 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.270 ************************************ 00:01:39.270 START TEST make 00:01:39.270 ************************************ 00:01:39.270 23:37:53 make -- common/autotest_common.sh@1117 -- $ make -j144 00:01:39.270 make[1]: Nothing to be done for 'all'. 00:01:40.653 The Meson build system 00:01:40.653 Version: 1.3.1 00:01:40.653 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:40.653 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:40.653 Build type: native build 00:01:40.653 Project name: libvfio-user 00:01:40.653 Project version: 0.0.1 00:01:40.653 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:40.653 C linker for the host machine: cc ld.bfd 2.39-16 00:01:40.653 Host machine cpu family: x86_64 00:01:40.653 Host machine cpu: x86_64 00:01:40.653 Run-time dependency threads found: YES 00:01:40.653 Library dl found: YES 00:01:40.653 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:40.653 Run-time dependency json-c found: YES 0.17 00:01:40.653 Run-time dependency cmocka found: YES 1.1.7 00:01:40.653 Program pytest-3 found: NO 00:01:40.653 Program flake8 found: NO 00:01:40.653 Program misspell-fixer found: NO 00:01:40.653 Program restructuredtext-lint found: NO 00:01:40.653 Program valgrind found: YES (/usr/bin/valgrind) 00:01:40.653 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:40.653 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:40.653 Compiler for C supports arguments -Wwrite-strings: YES 00:01:40.653 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:40.653 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:40.654 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:40.654 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:40.654 Build targets in project: 8 00:01:40.654 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:40.654 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:40.654 00:01:40.654 libvfio-user 0.0.1 00:01:40.654 00:01:40.654 User defined options 00:01:40.654 buildtype : debug 00:01:40.654 default_library: shared 00:01:40.654 libdir : /usr/local/lib 00:01:40.654 00:01:40.654 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:40.654 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:40.654 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:40.654 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:40.654 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:40.654 [4/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:40.654 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:40.654 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:40.654 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:40.914 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:40.915 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:40.915 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:40.915 [11/37] Compiling C object samples/null.p/null.c.o 00:01:40.915 [12/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:40.915 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:40.915 [14/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:40.915 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:40.915 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:40.915 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:40.915 [18/37] Compiling C object samples/server.p/server.c.o 00:01:40.915 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:40.915 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:40.915 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:40.915 [22/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:40.915 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:40.915 [24/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:40.915 [25/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:40.915 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:40.915 [27/37] Compiling C object samples/client.p/client.c.o 00:01:40.915 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:01:40.915 [29/37] Linking target samples/client 00:01:40.915 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:40.915 [31/37] Linking target test/unit_tests 00:01:41.177 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:41.177 [33/37] Linking target samples/null 00:01:41.177 [34/37] Linking target samples/server 00:01:41.177 [35/37] Linking target samples/shadow_ioeventfd_server 00:01:41.177 [36/37] Linking target samples/lspci 00:01:41.177 [37/37] Linking target samples/gpio-pci-idio-16 00:01:41.177 INFO: autodetecting backend as ninja 00:01:41.177 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:41.177 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:41.439 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:41.439 ninja: no work to do. 00:01:48.035 The Meson build system 00:01:48.035 Version: 1.3.1 00:01:48.035 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:48.035 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:48.035 Build type: native build 00:01:48.035 Program cat found: YES (/usr/bin/cat) 00:01:48.035 Project name: DPDK 00:01:48.035 Project version: 24.03.0 00:01:48.035 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:48.035 C linker for the host machine: cc ld.bfd 2.39-16 00:01:48.035 Host machine cpu family: x86_64 00:01:48.035 Host machine cpu: x86_64 00:01:48.035 Message: ## Building in Developer Mode ## 00:01:48.035 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:48.035 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:48.035 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:48.035 Program python3 found: YES (/usr/bin/python3) 00:01:48.035 Program cat found: YES (/usr/bin/cat) 00:01:48.035 Compiler for C supports arguments -march=native: YES 00:01:48.035 Checking for size of "void *" : 8 00:01:48.035 Checking for size of "void *" : 8 (cached) 00:01:48.035 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:48.035 Library m found: YES 00:01:48.035 Library numa found: YES 00:01:48.036 Has header "numaif.h" : YES 00:01:48.036 Library fdt found: NO 00:01:48.036 Library execinfo found: NO 00:01:48.036 Has header "execinfo.h" : YES 00:01:48.036 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:48.036 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:48.036 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:48.036 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:48.036 Run-time dependency openssl found: YES 3.0.9 00:01:48.036 Run-time dependency libpcap found: YES 1.10.4 00:01:48.036 Has header "pcap.h" with dependency libpcap: YES 00:01:48.036 Compiler for C supports arguments -Wcast-qual: YES 00:01:48.036 Compiler for C supports arguments -Wdeprecated: YES 00:01:48.036 Compiler for C supports arguments -Wformat: YES 00:01:48.036 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:48.036 Compiler for C supports arguments -Wformat-security: NO 00:01:48.036 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:48.036 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:48.036 Compiler for C supports arguments -Wnested-externs: YES 00:01:48.036 Compiler for C supports arguments -Wold-style-definition: YES 00:01:48.036 Compiler for C supports arguments -Wpointer-arith: YES 00:01:48.036 Compiler for C supports arguments -Wsign-compare: YES 00:01:48.036 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:48.036 Compiler for C supports arguments -Wundef: YES 00:01:48.036 Compiler for C supports arguments -Wwrite-strings: YES 00:01:48.036 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:48.036 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:48.036 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:48.036 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:48.036 Program objdump found: YES (/usr/bin/objdump) 00:01:48.036 Compiler for C supports arguments -mavx512f: YES 00:01:48.036 Checking if "AVX512 checking" compiles: YES 00:01:48.036 Fetching value of define "__SSE4_2__" : 1 00:01:48.036 Fetching value of define "__AES__" : 1 00:01:48.036 Fetching value of define "__AVX__" : 1 00:01:48.036 Fetching value of define "__AVX2__" : 1 00:01:48.036 Fetching value of define "__AVX512BW__" : 1 00:01:48.036 Fetching value of define "__AVX512CD__" : 1 00:01:48.036 Fetching value of define "__AVX512DQ__" : 1 00:01:48.036 Fetching value of define "__AVX512F__" : 1 00:01:48.036 Fetching value of define "__AVX512VL__" : 1 00:01:48.036 Fetching value of define "__PCLMUL__" : 1 00:01:48.036 Fetching value of define "__RDRND__" : 1 00:01:48.036 Fetching value of define "__RDSEED__" : 1 00:01:48.036 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:48.036 Fetching value of define "__znver1__" : (undefined) 00:01:48.036 Fetching value of define "__znver2__" : (undefined) 00:01:48.036 Fetching value of define "__znver3__" : (undefined) 00:01:48.036 Fetching value of define "__znver4__" : (undefined) 00:01:48.036 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:48.036 Message: lib/log: Defining dependency "log" 00:01:48.036 Message: lib/kvargs: Defining dependency "kvargs" 00:01:48.036 Message: lib/telemetry: Defining dependency "telemetry" 00:01:48.036 Checking for function "getentropy" : NO 00:01:48.036 Message: lib/eal: Defining dependency "eal" 00:01:48.036 Message: lib/ring: Defining dependency "ring" 00:01:48.036 Message: lib/rcu: Defining dependency "rcu" 00:01:48.036 Message: lib/mempool: Defining dependency "mempool" 00:01:48.036 Message: lib/mbuf: Defining dependency "mbuf" 00:01:48.036 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:48.036 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:48.036 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:48.036 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:48.036 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:48.036 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:48.036 Compiler for C supports arguments -mpclmul: YES 00:01:48.036 Compiler for C supports arguments -maes: YES 00:01:48.036 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:48.036 Compiler for C supports arguments -mavx512bw: YES 00:01:48.036 Compiler for C supports arguments -mavx512dq: YES 00:01:48.036 Compiler for C supports arguments -mavx512vl: YES 00:01:48.036 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:48.036 Compiler for C supports arguments -mavx2: YES 00:01:48.036 Compiler for C supports arguments -mavx: YES 00:01:48.036 Message: lib/net: Defining dependency "net" 00:01:48.036 Message: lib/meter: Defining dependency "meter" 00:01:48.036 Message: lib/ethdev: Defining dependency "ethdev" 00:01:48.036 Message: lib/pci: Defining dependency "pci" 00:01:48.036 Message: lib/cmdline: Defining dependency "cmdline" 00:01:48.036 Message: lib/hash: Defining dependency "hash" 00:01:48.036 Message: lib/timer: Defining dependency "timer" 00:01:48.036 Message: lib/compressdev: Defining dependency "compressdev" 00:01:48.036 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:48.036 Message: lib/dmadev: Defining dependency "dmadev" 00:01:48.036 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:01:48.036 Message: lib/power: Defining dependency "power" 00:01:48.036 Message: lib/reorder: Defining dependency "reorder" 00:01:48.036 Message: lib/security: Defining dependency "security" 00:01:48.036 Has header "linux/userfaultfd.h" : YES 00:01:48.036 Has header "linux/vduse.h" : YES 00:01:48.036 Message: lib/vhost: Defining dependency "vhost" 00:01:48.036 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:48.036 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:48.036 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:48.036 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:48.036 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:48.036 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:48.036 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:48.036 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:48.036 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:48.036 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:48.036 Program doxygen found: YES (/usr/bin/doxygen) 00:01:48.036 Configuring doxy-api-html.conf using configuration 00:01:48.036 Configuring doxy-api-man.conf using configuration 00:01:48.036 Program mandb found: YES (/usr/bin/mandb) 00:01:48.036 Program sphinx-build found: NO 00:01:48.036 Configuring rte_build_config.h using configuration 00:01:48.036 Message: 00:01:48.036 ================= 00:01:48.036 Applications Enabled 00:01:48.036 ================= 00:01:48.036 00:01:48.036 apps: 00:01:48.036 00:01:48.036 00:01:48.036 Message: 00:01:48.036 ================= 00:01:48.036 Libraries Enabled 00:01:48.036 ================= 00:01:48.036 00:01:48.036 libs: 00:01:48.036 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:48.036 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:48.036 cryptodev, dmadev, power, reorder, security, vhost, 00:01:48.036 00:01:48.036 Message: 00:01:48.036 =============== 00:01:48.036 Drivers Enabled 00:01:48.036 =============== 00:01:48.036 00:01:48.036 common: 00:01:48.036 00:01:48.036 bus: 00:01:48.036 pci, vdev, 00:01:48.036 mempool: 00:01:48.036 ring, 00:01:48.036 dma: 00:01:48.036 00:01:48.036 net: 00:01:48.036 00:01:48.036 crypto: 00:01:48.036 00:01:48.036 compress: 00:01:48.036 00:01:48.036 vdpa: 00:01:48.036 00:01:48.036 00:01:48.036 Message: 00:01:48.036 ================= 00:01:48.036 Content Skipped 00:01:48.036 ================= 00:01:48.036 00:01:48.036 apps: 00:01:48.036 dumpcap: explicitly disabled via build config 00:01:48.036 graph: explicitly disabled via build config 00:01:48.036 pdump: explicitly disabled via build config 00:01:48.036 proc-info: explicitly disabled via build config 00:01:48.036 test-acl: explicitly disabled via build config 00:01:48.036 test-bbdev: explicitly disabled via build config 00:01:48.036 test-cmdline: explicitly disabled via build config 00:01:48.036 test-compress-perf: explicitly disabled via build config 00:01:48.036 test-crypto-perf: explicitly disabled via build config 00:01:48.036 test-dma-perf: explicitly disabled via build config 00:01:48.036 test-eventdev: explicitly disabled via build config 00:01:48.036 test-fib: explicitly disabled via build config 00:01:48.036 test-flow-perf: explicitly disabled via build config 00:01:48.036 test-gpudev: explicitly disabled via build config 00:01:48.036 
test-mldev: explicitly disabled via build config 00:01:48.036 test-pipeline: explicitly disabled via build config 00:01:48.036 test-pmd: explicitly disabled via build config 00:01:48.036 test-regex: explicitly disabled via build config 00:01:48.036 test-sad: explicitly disabled via build config 00:01:48.036 test-security-perf: explicitly disabled via build config 00:01:48.036 00:01:48.036 libs: 00:01:48.036 argparse: explicitly disabled via build config 00:01:48.036 metrics: explicitly disabled via build config 00:01:48.036 acl: explicitly disabled via build config 00:01:48.036 bbdev: explicitly disabled via build config 00:01:48.036 bitratestats: explicitly disabled via build config 00:01:48.036 bpf: explicitly disabled via build config 00:01:48.036 cfgfile: explicitly disabled via build config 00:01:48.036 distributor: explicitly disabled via build config 00:01:48.036 efd: explicitly disabled via build config 00:01:48.036 eventdev: explicitly disabled via build config 00:01:48.036 dispatcher: explicitly disabled via build config 00:01:48.036 gpudev: explicitly disabled via build config 00:01:48.036 gro: explicitly disabled via build config 00:01:48.036 gso: explicitly disabled via build config 00:01:48.036 ip_frag: explicitly disabled via build config 00:01:48.036 jobstats: explicitly disabled via build config 00:01:48.036 latencystats: explicitly disabled via build config 00:01:48.036 lpm: explicitly disabled via build config 00:01:48.037 member: explicitly disabled via build config 00:01:48.037 pcapng: explicitly disabled via build config 00:01:48.037 rawdev: explicitly disabled via build config 00:01:48.037 regexdev: explicitly disabled via build config 00:01:48.037 mldev: explicitly disabled via build config 00:01:48.037 rib: explicitly disabled via build config 00:01:48.037 sched: explicitly disabled via build config 00:01:48.037 stack: explicitly disabled via build config 00:01:48.037 ipsec: explicitly disabled via build config 00:01:48.037 pdcp: explicitly disabled via build config 00:01:48.037 fib: explicitly disabled via build config 00:01:48.037 port: explicitly disabled via build config 00:01:48.037 pdump: explicitly disabled via build config 00:01:48.037 table: explicitly disabled via build config 00:01:48.037 pipeline: explicitly disabled via build config 00:01:48.037 graph: explicitly disabled via build config 00:01:48.037 node: explicitly disabled via build config 00:01:48.037 00:01:48.037 drivers: 00:01:48.037 common/cpt: not in enabled drivers build config 00:01:48.037 common/dpaax: not in enabled drivers build config 00:01:48.037 common/iavf: not in enabled drivers build config 00:01:48.037 common/idpf: not in enabled drivers build config 00:01:48.037 common/ionic: not in enabled drivers build config 00:01:48.037 common/mvep: not in enabled drivers build config 00:01:48.037 common/octeontx: not in enabled drivers build config 00:01:48.037 bus/auxiliary: not in enabled drivers build config 00:01:48.037 bus/cdx: not in enabled drivers build config 00:01:48.037 bus/dpaa: not in enabled drivers build config 00:01:48.037 bus/fslmc: not in enabled drivers build config 00:01:48.037 bus/ifpga: not in enabled drivers build config 00:01:48.037 bus/platform: not in enabled drivers build config 00:01:48.037 bus/uacce: not in enabled drivers build config 00:01:48.037 bus/vmbus: not in enabled drivers build config 00:01:48.037 common/cnxk: not in enabled drivers build config 00:01:48.037 common/mlx5: not in enabled drivers build config 00:01:48.037 common/nfp: not in enabled drivers 
build config 00:01:48.037 common/nitrox: not in enabled drivers build config 00:01:48.037 common/qat: not in enabled drivers build config 00:01:48.037 common/sfc_efx: not in enabled drivers build config 00:01:48.037 mempool/bucket: not in enabled drivers build config 00:01:48.037 mempool/cnxk: not in enabled drivers build config 00:01:48.037 mempool/dpaa: not in enabled drivers build config 00:01:48.037 mempool/dpaa2: not in enabled drivers build config 00:01:48.037 mempool/octeontx: not in enabled drivers build config 00:01:48.037 mempool/stack: not in enabled drivers build config 00:01:48.037 dma/cnxk: not in enabled drivers build config 00:01:48.037 dma/dpaa: not in enabled drivers build config 00:01:48.037 dma/dpaa2: not in enabled drivers build config 00:01:48.037 dma/hisilicon: not in enabled drivers build config 00:01:48.037 dma/idxd: not in enabled drivers build config 00:01:48.037 dma/ioat: not in enabled drivers build config 00:01:48.037 dma/skeleton: not in enabled drivers build config 00:01:48.037 net/af_packet: not in enabled drivers build config 00:01:48.037 net/af_xdp: not in enabled drivers build config 00:01:48.037 net/ark: not in enabled drivers build config 00:01:48.037 net/atlantic: not in enabled drivers build config 00:01:48.037 net/avp: not in enabled drivers build config 00:01:48.037 net/axgbe: not in enabled drivers build config 00:01:48.037 net/bnx2x: not in enabled drivers build config 00:01:48.037 net/bnxt: not in enabled drivers build config 00:01:48.037 net/bonding: not in enabled drivers build config 00:01:48.037 net/cnxk: not in enabled drivers build config 00:01:48.037 net/cpfl: not in enabled drivers build config 00:01:48.037 net/cxgbe: not in enabled drivers build config 00:01:48.037 net/dpaa: not in enabled drivers build config 00:01:48.037 net/dpaa2: not in enabled drivers build config 00:01:48.037 net/e1000: not in enabled drivers build config 00:01:48.037 net/ena: not in enabled drivers build config 00:01:48.037 net/enetc: not in enabled drivers build config 00:01:48.037 net/enetfec: not in enabled drivers build config 00:01:48.037 net/enic: not in enabled drivers build config 00:01:48.037 net/failsafe: not in enabled drivers build config 00:01:48.037 net/fm10k: not in enabled drivers build config 00:01:48.037 net/gve: not in enabled drivers build config 00:01:48.037 net/hinic: not in enabled drivers build config 00:01:48.037 net/hns3: not in enabled drivers build config 00:01:48.037 net/i40e: not in enabled drivers build config 00:01:48.037 net/iavf: not in enabled drivers build config 00:01:48.037 net/ice: not in enabled drivers build config 00:01:48.037 net/idpf: not in enabled drivers build config 00:01:48.037 net/igc: not in enabled drivers build config 00:01:48.037 net/ionic: not in enabled drivers build config 00:01:48.037 net/ipn3ke: not in enabled drivers build config 00:01:48.037 net/ixgbe: not in enabled drivers build config 00:01:48.037 net/mana: not in enabled drivers build config 00:01:48.037 net/memif: not in enabled drivers build config 00:01:48.037 net/mlx4: not in enabled drivers build config 00:01:48.037 net/mlx5: not in enabled drivers build config 00:01:48.037 net/mvneta: not in enabled drivers build config 00:01:48.037 net/mvpp2: not in enabled drivers build config 00:01:48.037 net/netvsc: not in enabled drivers build config 00:01:48.037 net/nfb: not in enabled drivers build config 00:01:48.037 net/nfp: not in enabled drivers build config 00:01:48.037 net/ngbe: not in enabled drivers build config 00:01:48.037 net/null: not in 
enabled drivers build config 00:01:48.037 net/octeontx: not in enabled drivers build config 00:01:48.037 net/octeon_ep: not in enabled drivers build config 00:01:48.037 net/pcap: not in enabled drivers build config 00:01:48.037 net/pfe: not in enabled drivers build config 00:01:48.037 net/qede: not in enabled drivers build config 00:01:48.037 net/ring: not in enabled drivers build config 00:01:48.037 net/sfc: not in enabled drivers build config 00:01:48.037 net/softnic: not in enabled drivers build config 00:01:48.037 net/tap: not in enabled drivers build config 00:01:48.037 net/thunderx: not in enabled drivers build config 00:01:48.037 net/txgbe: not in enabled drivers build config 00:01:48.037 net/vdev_netvsc: not in enabled drivers build config 00:01:48.037 net/vhost: not in enabled drivers build config 00:01:48.037 net/virtio: not in enabled drivers build config 00:01:48.037 net/vmxnet3: not in enabled drivers build config 00:01:48.037 raw/*: missing internal dependency, "rawdev" 00:01:48.037 crypto/armv8: not in enabled drivers build config 00:01:48.037 crypto/bcmfs: not in enabled drivers build config 00:01:48.037 crypto/caam_jr: not in enabled drivers build config 00:01:48.037 crypto/ccp: not in enabled drivers build config 00:01:48.037 crypto/cnxk: not in enabled drivers build config 00:01:48.037 crypto/dpaa_sec: not in enabled drivers build config 00:01:48.037 crypto/dpaa2_sec: not in enabled drivers build config 00:01:48.037 crypto/ipsec_mb: not in enabled drivers build config 00:01:48.037 crypto/mlx5: not in enabled drivers build config 00:01:48.037 crypto/mvsam: not in enabled drivers build config 00:01:48.037 crypto/nitrox: not in enabled drivers build config 00:01:48.037 crypto/null: not in enabled drivers build config 00:01:48.037 crypto/octeontx: not in enabled drivers build config 00:01:48.037 crypto/openssl: not in enabled drivers build config 00:01:48.037 crypto/scheduler: not in enabled drivers build config 00:01:48.037 crypto/uadk: not in enabled drivers build config 00:01:48.037 crypto/virtio: not in enabled drivers build config 00:01:48.037 compress/isal: not in enabled drivers build config 00:01:48.037 compress/mlx5: not in enabled drivers build config 00:01:48.037 compress/nitrox: not in enabled drivers build config 00:01:48.037 compress/octeontx: not in enabled drivers build config 00:01:48.037 compress/zlib: not in enabled drivers build config 00:01:48.037 regex/*: missing internal dependency, "regexdev" 00:01:48.037 ml/*: missing internal dependency, "mldev" 00:01:48.037 vdpa/ifc: not in enabled drivers build config 00:01:48.037 vdpa/mlx5: not in enabled drivers build config 00:01:48.037 vdpa/nfp: not in enabled drivers build config 00:01:48.037 vdpa/sfc: not in enabled drivers build config 00:01:48.037 event/*: missing internal dependency, "eventdev" 00:01:48.037 baseband/*: missing internal dependency, "bbdev" 00:01:48.037 gpu/*: missing internal dependency, "gpudev" 00:01:48.037 00:01:48.037 00:01:48.037 Build targets in project: 84 00:01:48.037 00:01:48.037 DPDK 24.03.0 00:01:48.037 00:01:48.037 User defined options 00:01:48.037 buildtype : debug 00:01:48.037 default_library : shared 00:01:48.037 libdir : lib 00:01:48.037 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:48.037 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:48.037 c_link_args : 00:01:48.037 cpu_instruction_set: native 00:01:48.037 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:48.037 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:48.037 enable_docs : false 00:01:48.037 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:48.037 enable_kmods : false 00:01:48.037 max_lcores : 128 00:01:48.037 tests : false 00:01:48.037 00:01:48.037 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:48.037 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:48.037 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:48.037 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:48.037 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:48.037 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:48.037 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:48.037 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:48.037 [7/267] Linking static target lib/librte_kvargs.a 00:01:48.037 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:48.037 [9/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:48.037 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:48.037 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:48.037 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:48.037 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:48.037 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:48.037 [15/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:48.037 [16/267] Linking static target lib/librte_log.a 00:01:48.037 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:48.301 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:48.301 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:48.301 [20/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:48.301 [21/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:48.301 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:48.301 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:48.301 [24/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:48.301 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:48.301 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:48.301 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:48.301 [28/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:48.301 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:48.301 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:48.301 [31/267] Linking static target 
lib/librte_pci.a 00:01:48.301 [32/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:48.301 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:48.301 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:48.301 [35/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:48.301 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:48.301 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:48.301 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:48.560 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:48.560 [40/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.561 [41/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:48.561 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:48.561 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:48.561 [44/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:48.561 [45/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:48.561 [46/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.561 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:48.561 [48/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:48.561 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:48.561 [50/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:48.561 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:48.561 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:48.561 [53/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:48.561 [54/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:48.561 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:48.561 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:48.561 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:48.561 [58/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:48.561 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:48.561 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:48.561 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:48.561 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:48.561 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:48.561 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:48.561 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:48.561 [66/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:48.561 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:48.561 [68/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:48.561 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:48.561 [70/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:48.561 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:48.561 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:48.561 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:48.561 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:48.561 [75/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:48.561 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:48.561 [77/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:48.561 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:48.561 [79/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:48.561 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:48.561 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:48.561 [82/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:48.561 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:48.561 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:48.561 [85/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:48.561 [86/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:48.561 [87/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:48.561 [88/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:48.561 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:48.561 [90/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:48.561 [91/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:48.561 [92/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:48.561 [93/267] Linking static target lib/librte_cmdline.a 00:01:48.561 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:48.561 [95/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:48.821 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:48.821 [97/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:48.821 [98/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:48.821 [99/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:48.821 [100/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:48.821 [101/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:48.821 [102/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:48.821 [103/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:48.821 [104/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:48.821 [105/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:48.821 [106/267] Linking static target lib/librte_ring.a 00:01:48.821 [107/267] Linking static target lib/librte_meter.a 00:01:48.821 [108/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:48.821 [109/267] Linking static target lib/librte_telemetry.a 00:01:48.821 [110/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 
00:01:48.821 [111/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:48.821 [112/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:48.821 [113/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:48.821 [114/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:48.821 [115/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:48.821 [116/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:48.821 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:48.821 [118/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:48.821 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:48.821 [120/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:48.821 [121/267] Linking static target lib/librte_timer.a 00:01:48.821 [122/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:48.821 [123/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:48.821 [124/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:48.821 [125/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:48.821 [126/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:48.821 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:48.821 [128/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:48.821 [129/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:48.821 [130/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:48.821 [131/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:48.821 [132/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:48.821 [133/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:48.821 [134/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:48.821 [135/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:48.821 [136/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.821 [137/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:48.821 [138/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:48.821 [139/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:48.821 [140/267] Linking static target lib/librte_dmadev.a 00:01:48.821 [141/267] Linking static target lib/librte_net.a 00:01:48.821 [142/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:48.821 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:48.821 [144/267] Linking static target lib/librte_mempool.a 00:01:48.821 [145/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:48.821 [146/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:48.821 [147/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:48.821 [148/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:48.821 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:48.821 [150/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:48.821 [151/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:48.821 [152/267] Linking static target lib/librte_compressdev.a 00:01:48.821 [153/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:48.821 [154/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:48.821 [155/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:48.821 [156/267] Linking static target lib/librte_mbuf.a 00:01:48.821 [157/267] Linking target lib/librte_log.so.24.1 00:01:48.821 [158/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:48.821 [159/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:48.821 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:48.821 [161/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:48.821 [162/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:48.821 [163/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:48.821 [164/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:48.821 [165/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:48.821 [166/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:48.821 [167/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:48.821 [168/267] Linking static target lib/librte_rcu.a 00:01:48.821 [169/267] Linking static target lib/librte_reorder.a 00:01:48.821 [170/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:48.821 [171/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:48.821 [172/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:48.821 [173/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:48.821 [174/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:48.821 [175/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:48.821 [176/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:48.821 [177/267] Linking static target lib/librte_power.a 00:01:48.821 [178/267] Linking static target lib/librte_eal.a 00:01:48.821 [179/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:48.821 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:48.821 [181/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:48.821 [182/267] Linking static target lib/librte_security.a 00:01:48.821 [183/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:49.083 [184/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:49.083 [185/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:49.083 [186/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.083 [187/267] Linking target lib/librte_kvargs.so.24.1 00:01:49.083 [188/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:49.083 [189/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:49.083 [190/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:49.083 [191/267] Linking static target drivers/librte_bus_vdev.a 00:01:49.083 [192/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.083 [193/267] Compiling C object 
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:49.083 [194/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:49.083 [195/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:49.083 [196/267] Linking static target lib/librte_hash.a 00:01:49.083 [197/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:49.083 [198/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.083 [199/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.083 [200/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:49.083 [201/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.083 [202/267] Linking static target drivers/librte_mempool_ring.a 00:01:49.083 [203/267] Linking static target drivers/librte_bus_pci.a 00:01:49.083 [204/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:49.083 [205/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:49.083 [206/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:49.083 [207/267] Linking static target lib/librte_cryptodev.a 00:01:49.083 [208/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.343 [209/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.343 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.343 [211/267] Linking target lib/librte_telemetry.so.24.1 00:01:49.343 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.343 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.605 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:49.605 [215/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:49.605 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.605 [217/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.605 [218/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.605 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:49.605 [220/267] Linking static target lib/librte_ethdev.a 00:01:49.605 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.866 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.866 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.866 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.867 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.127 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.698 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:50.698 [228/267] Linking static target lib/librte_vhost.a 00:01:51.267 [229/267] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.650 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.237 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.623 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.623 [233/267] Linking target lib/librte_eal.so.24.1 00:02:00.623 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:00.623 [235/267] Linking target lib/librte_pci.so.24.1 00:02:00.623 [236/267] Linking target lib/librte_ring.so.24.1 00:02:00.623 [237/267] Linking target lib/librte_timer.so.24.1 00:02:00.623 [238/267] Linking target lib/librte_meter.so.24.1 00:02:00.623 [239/267] Linking target lib/librte_dmadev.so.24.1 00:02:00.623 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:00.884 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:00.884 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:00.884 [243/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:00.884 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:00.884 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:00.884 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:00.884 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:00.884 [248/267] Linking target lib/librte_mempool.so.24.1 00:02:00.884 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:00.884 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:01.145 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:01.145 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:01.145 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:01.145 [254/267] Linking target lib/librte_reorder.so.24.1 00:02:01.145 [255/267] Linking target lib/librte_net.so.24.1 00:02:01.145 [256/267] Linking target lib/librte_compressdev.so.24.1 00:02:01.145 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:01.407 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:01.407 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:01.407 [260/267] Linking target lib/librte_security.so.24.1 00:02:01.407 [261/267] Linking target lib/librte_hash.so.24.1 00:02:01.407 [262/267] Linking target lib/librte_cmdline.so.24.1 00:02:01.407 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:01.668 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:01.668 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:01.668 [266/267] Linking target lib/librte_power.so.24.1 00:02:01.668 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:01.668 INFO: autodetecting backend as ninja 00:02:01.668 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:02.719 CC lib/ut/ut.o 00:02:02.719 CC lib/ut_mock/mock.o 00:02:02.719 CC lib/log/log.o 00:02:02.719 CC lib/log/log_flags.o 00:02:02.719 CC lib/log/log_deprecated.o 00:02:02.981 LIB libspdk_ut.a 
00:02:02.981 LIB libspdk_ut_mock.a 00:02:02.981 LIB libspdk_log.a 00:02:02.981 SO libspdk_ut_mock.so.6.0 00:02:02.981 SO libspdk_ut.so.2.0 00:02:02.981 SO libspdk_log.so.7.0 00:02:02.981 SYMLINK libspdk_ut_mock.so 00:02:02.981 SYMLINK libspdk_ut.so 00:02:02.981 SYMLINK libspdk_log.so 00:02:03.553 CC lib/dma/dma.o 00:02:03.553 CC lib/ioat/ioat.o 00:02:03.553 CC lib/util/base64.o 00:02:03.553 CC lib/util/bit_array.o 00:02:03.553 CXX lib/trace_parser/trace.o 00:02:03.553 CC lib/util/cpuset.o 00:02:03.553 CC lib/util/crc16.o 00:02:03.553 CC lib/util/crc32.o 00:02:03.553 CC lib/util/crc32c.o 00:02:03.553 CC lib/util/crc64.o 00:02:03.553 CC lib/util/crc32_ieee.o 00:02:03.553 CC lib/util/dif.o 00:02:03.553 CC lib/util/fd.o 00:02:03.553 CC lib/util/file.o 00:02:03.553 CC lib/util/hexlify.o 00:02:03.553 CC lib/util/iov.o 00:02:03.553 CC lib/util/math.o 00:02:03.553 CC lib/util/pipe.o 00:02:03.553 CC lib/util/strerror_tls.o 00:02:03.553 CC lib/util/string.o 00:02:03.553 CC lib/util/uuid.o 00:02:03.553 CC lib/util/fd_group.o 00:02:03.553 CC lib/util/xor.o 00:02:03.553 CC lib/util/zipf.o 00:02:03.553 CC lib/vfio_user/host/vfio_user_pci.o 00:02:03.553 CC lib/vfio_user/host/vfio_user.o 00:02:03.553 LIB libspdk_dma.a 00:02:03.553 SO libspdk_dma.so.4.0 00:02:03.814 LIB libspdk_ioat.a 00:02:03.814 SYMLINK libspdk_dma.so 00:02:03.814 SO libspdk_ioat.so.7.0 00:02:03.814 SYMLINK libspdk_ioat.so 00:02:03.814 LIB libspdk_vfio_user.a 00:02:03.814 SO libspdk_vfio_user.so.5.0 00:02:03.814 LIB libspdk_util.a 00:02:04.075 SYMLINK libspdk_vfio_user.so 00:02:04.075 SO libspdk_util.so.9.1 00:02:04.075 SYMLINK libspdk_util.so 00:02:04.336 LIB libspdk_trace_parser.a 00:02:04.336 SO libspdk_trace_parser.so.5.0 00:02:04.336 SYMLINK libspdk_trace_parser.so 00:02:04.598 CC lib/conf/conf.o 00:02:04.598 CC lib/vmd/vmd.o 00:02:04.598 CC lib/env_dpdk/env.o 00:02:04.598 CC lib/vmd/led.o 00:02:04.598 CC lib/env_dpdk/memory.o 00:02:04.598 CC lib/env_dpdk/pci.o 00:02:04.598 CC lib/rdma_utils/rdma_utils.o 00:02:04.598 CC lib/env_dpdk/init.o 00:02:04.598 CC lib/env_dpdk/threads.o 00:02:04.598 CC lib/env_dpdk/pci_ioat.o 00:02:04.598 CC lib/env_dpdk/pci_virtio.o 00:02:04.598 CC lib/env_dpdk/pci_vmd.o 00:02:04.598 CC lib/env_dpdk/pci_idxd.o 00:02:04.598 CC lib/env_dpdk/pci_event.o 00:02:04.598 CC lib/env_dpdk/sigbus_handler.o 00:02:04.598 CC lib/idxd/idxd.o 00:02:04.598 CC lib/env_dpdk/pci_dpdk.o 00:02:04.598 CC lib/rdma_provider/common.o 00:02:04.598 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:04.598 CC lib/idxd/idxd_user.o 00:02:04.598 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:04.598 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:04.598 CC lib/idxd/idxd_kernel.o 00:02:04.598 CC lib/json/json_parse.o 00:02:04.598 CC lib/json/json_util.o 00:02:04.598 CC lib/json/json_write.o 00:02:04.598 LIB libspdk_rdma_provider.a 00:02:04.859 LIB libspdk_conf.a 00:02:04.859 SO libspdk_rdma_provider.so.6.0 00:02:04.859 SO libspdk_conf.so.6.0 00:02:04.859 LIB libspdk_rdma_utils.a 00:02:04.859 LIB libspdk_json.a 00:02:04.859 SYMLINK libspdk_rdma_provider.so 00:02:04.859 SO libspdk_rdma_utils.so.1.0 00:02:04.859 SYMLINK libspdk_conf.so 00:02:04.859 SO libspdk_json.so.6.0 00:02:04.859 SYMLINK libspdk_rdma_utils.so 00:02:04.859 SYMLINK libspdk_json.so 00:02:04.859 LIB libspdk_idxd.a 00:02:05.121 SO libspdk_idxd.so.12.0 00:02:05.121 LIB libspdk_vmd.a 00:02:05.121 SYMLINK libspdk_idxd.so 00:02:05.121 SO libspdk_vmd.so.6.0 00:02:05.121 SYMLINK libspdk_vmd.so 00:02:05.382 CC lib/jsonrpc/jsonrpc_server.o 00:02:05.382 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:05.382 CC 
lib/jsonrpc/jsonrpc_client.o 00:02:05.382 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:05.382 LIB libspdk_jsonrpc.a 00:02:05.644 SO libspdk_jsonrpc.so.6.0 00:02:05.644 SYMLINK libspdk_jsonrpc.so 00:02:05.644 LIB libspdk_env_dpdk.a 00:02:05.644 SO libspdk_env_dpdk.so.14.1 00:02:05.905 SYMLINK libspdk_env_dpdk.so 00:02:05.905 CC lib/rpc/rpc.o 00:02:06.165 LIB libspdk_rpc.a 00:02:06.165 SO libspdk_rpc.so.6.0 00:02:06.426 SYMLINK libspdk_rpc.so 00:02:06.686 CC lib/keyring/keyring.o 00:02:06.686 CC lib/keyring/keyring_rpc.o 00:02:06.686 CC lib/notify/notify.o 00:02:06.686 CC lib/notify/notify_rpc.o 00:02:06.686 CC lib/trace/trace.o 00:02:06.686 CC lib/trace/trace_flags.o 00:02:06.686 CC lib/trace/trace_rpc.o 00:02:06.946 LIB libspdk_notify.a 00:02:06.946 LIB libspdk_keyring.a 00:02:06.946 SO libspdk_notify.so.6.0 00:02:06.946 LIB libspdk_trace.a 00:02:06.946 SO libspdk_keyring.so.1.0 00:02:06.946 SYMLINK libspdk_notify.so 00:02:06.946 SO libspdk_trace.so.10.0 00:02:06.946 SYMLINK libspdk_keyring.so 00:02:06.946 SYMLINK libspdk_trace.so 00:02:07.518 CC lib/thread/thread.o 00:02:07.518 CC lib/thread/iobuf.o 00:02:07.518 CC lib/sock/sock.o 00:02:07.518 CC lib/sock/sock_rpc.o 00:02:07.779 LIB libspdk_sock.a 00:02:07.779 SO libspdk_sock.so.10.0 00:02:07.779 SYMLINK libspdk_sock.so 00:02:08.351 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:08.351 CC lib/nvme/nvme_ctrlr.o 00:02:08.351 CC lib/nvme/nvme_ns_cmd.o 00:02:08.351 CC lib/nvme/nvme_fabric.o 00:02:08.351 CC lib/nvme/nvme_ns.o 00:02:08.351 CC lib/nvme/nvme_pcie_common.o 00:02:08.351 CC lib/nvme/nvme_pcie.o 00:02:08.351 CC lib/nvme/nvme_qpair.o 00:02:08.351 CC lib/nvme/nvme.o 00:02:08.351 CC lib/nvme/nvme_quirks.o 00:02:08.351 CC lib/nvme/nvme_transport.o 00:02:08.351 CC lib/nvme/nvme_discovery.o 00:02:08.351 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:08.351 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:08.351 CC lib/nvme/nvme_tcp.o 00:02:08.351 CC lib/nvme/nvme_opal.o 00:02:08.351 CC lib/nvme/nvme_io_msg.o 00:02:08.351 CC lib/nvme/nvme_poll_group.o 00:02:08.351 CC lib/nvme/nvme_zns.o 00:02:08.351 CC lib/nvme/nvme_stubs.o 00:02:08.351 CC lib/nvme/nvme_auth.o 00:02:08.351 CC lib/nvme/nvme_cuse.o 00:02:08.351 CC lib/nvme/nvme_vfio_user.o 00:02:08.351 CC lib/nvme/nvme_rdma.o 00:02:08.612 LIB libspdk_thread.a 00:02:08.612 SO libspdk_thread.so.10.1 00:02:08.872 SYMLINK libspdk_thread.so 00:02:09.132 CC lib/virtio/virtio.o 00:02:09.132 CC lib/virtio/virtio_vfio_user.o 00:02:09.132 CC lib/virtio/virtio_vhost_user.o 00:02:09.132 CC lib/virtio/virtio_pci.o 00:02:09.132 CC lib/vfu_tgt/tgt_endpoint.o 00:02:09.132 CC lib/vfu_tgt/tgt_rpc.o 00:02:09.132 CC lib/init/json_config.o 00:02:09.132 CC lib/init/subsystem_rpc.o 00:02:09.132 CC lib/init/subsystem.o 00:02:09.132 CC lib/init/rpc.o 00:02:09.132 CC lib/accel/accel.o 00:02:09.132 CC lib/accel/accel_rpc.o 00:02:09.132 CC lib/blob/blobstore.o 00:02:09.132 CC lib/accel/accel_sw.o 00:02:09.132 CC lib/blob/request.o 00:02:09.132 CC lib/blob/zeroes.o 00:02:09.132 CC lib/blob/blob_bs_dev.o 00:02:09.392 LIB libspdk_init.a 00:02:09.392 LIB libspdk_virtio.a 00:02:09.392 SO libspdk_init.so.5.0 00:02:09.392 LIB libspdk_vfu_tgt.a 00:02:09.392 SO libspdk_virtio.so.7.0 00:02:09.392 SO libspdk_vfu_tgt.so.3.0 00:02:09.392 SYMLINK libspdk_init.so 00:02:09.653 SYMLINK libspdk_vfu_tgt.so 00:02:09.653 SYMLINK libspdk_virtio.so 00:02:09.912 CC lib/event/app.o 00:02:09.912 CC lib/event/reactor.o 00:02:09.912 CC lib/event/log_rpc.o 00:02:09.912 CC lib/event/app_rpc.o 00:02:09.912 CC lib/event/scheduler_static.o 00:02:09.912 LIB libspdk_accel.a 
00:02:09.912 SO libspdk_accel.so.15.1 00:02:10.223 SYMLINK libspdk_accel.so 00:02:10.223 LIB libspdk_nvme.a 00:02:10.223 LIB libspdk_event.a 00:02:10.223 SO libspdk_nvme.so.13.1 00:02:10.223 SO libspdk_event.so.14.0 00:02:10.483 SYMLINK libspdk_event.so 00:02:10.483 CC lib/bdev/bdev.o 00:02:10.483 CC lib/bdev/bdev_rpc.o 00:02:10.483 CC lib/bdev/bdev_zone.o 00:02:10.483 CC lib/bdev/part.o 00:02:10.483 CC lib/bdev/scsi_nvme.o 00:02:10.483 SYMLINK libspdk_nvme.so 00:02:11.867 LIB libspdk_blob.a 00:02:11.867 SO libspdk_blob.so.11.0 00:02:11.867 SYMLINK libspdk_blob.so 00:02:12.128 CC lib/blobfs/blobfs.o 00:02:12.128 CC lib/blobfs/tree.o 00:02:12.128 CC lib/lvol/lvol.o 00:02:12.702 LIB libspdk_bdev.a 00:02:12.702 SO libspdk_bdev.so.15.1 00:02:12.702 SYMLINK libspdk_bdev.so 00:02:12.963 LIB libspdk_blobfs.a 00:02:12.963 SO libspdk_blobfs.so.10.0 00:02:12.963 LIB libspdk_lvol.a 00:02:12.963 SYMLINK libspdk_blobfs.so 00:02:12.963 SO libspdk_lvol.so.10.0 00:02:12.963 SYMLINK libspdk_lvol.so 00:02:13.222 CC lib/nvmf/ctrlr.o 00:02:13.222 CC lib/nvmf/ctrlr_discovery.o 00:02:13.222 CC lib/ftl/ftl_core.o 00:02:13.222 CC lib/nvmf/ctrlr_bdev.o 00:02:13.222 CC lib/ftl/ftl_init.o 00:02:13.222 CC lib/nvmf/subsystem.o 00:02:13.222 CC lib/ftl/ftl_layout.o 00:02:13.222 CC lib/nvmf/nvmf.o 00:02:13.222 CC lib/nvmf/nvmf_rpc.o 00:02:13.222 CC lib/ftl/ftl_debug.o 00:02:13.222 CC lib/scsi/dev.o 00:02:13.222 CC lib/ftl/ftl_io.o 00:02:13.222 CC lib/nvmf/transport.o 00:02:13.222 CC lib/scsi/lun.o 00:02:13.222 CC lib/ftl/ftl_sb.o 00:02:13.222 CC lib/nvmf/tcp.o 00:02:13.222 CC lib/nbd/nbd.o 00:02:13.222 CC lib/scsi/port.o 00:02:13.222 CC lib/nbd/nbd_rpc.o 00:02:13.222 CC lib/ftl/ftl_l2p_flat.o 00:02:13.222 CC lib/scsi/scsi.o 00:02:13.222 CC lib/nvmf/mdns_server.o 00:02:13.222 CC lib/ftl/ftl_l2p.o 00:02:13.222 CC lib/nvmf/stubs.o 00:02:13.222 CC lib/ublk/ublk.o 00:02:13.222 CC lib/ftl/ftl_nv_cache.o 00:02:13.222 CC lib/nvmf/vfio_user.o 00:02:13.222 CC lib/scsi/scsi_bdev.o 00:02:13.222 CC lib/ftl/ftl_band.o 00:02:13.222 CC lib/ublk/ublk_rpc.o 00:02:13.222 CC lib/nvmf/rdma.o 00:02:13.222 CC lib/scsi/scsi_pr.o 00:02:13.222 CC lib/ftl/ftl_band_ops.o 00:02:13.223 CC lib/nvmf/auth.o 00:02:13.223 CC lib/ftl/ftl_writer.o 00:02:13.223 CC lib/scsi/scsi_rpc.o 00:02:13.223 CC lib/ftl/ftl_rq.o 00:02:13.223 CC lib/scsi/task.o 00:02:13.223 CC lib/ftl/ftl_reloc.o 00:02:13.223 CC lib/ftl/ftl_l2p_cache.o 00:02:13.223 CC lib/ftl/ftl_p2l.o 00:02:13.223 CC lib/ftl/mngt/ftl_mngt.o 00:02:13.223 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:13.223 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:13.223 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:13.223 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:13.223 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:13.223 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:13.223 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:13.223 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:13.223 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:13.223 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:13.223 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:13.223 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:13.223 CC lib/ftl/utils/ftl_md.o 00:02:13.223 CC lib/ftl/utils/ftl_conf.o 00:02:13.223 CC lib/ftl/utils/ftl_mempool.o 00:02:13.223 CC lib/ftl/utils/ftl_property.o 00:02:13.223 CC lib/ftl/utils/ftl_bitmap.o 00:02:13.223 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:13.223 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:13.223 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:13.223 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:13.223 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:13.223 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 
00:02:13.223 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:13.223 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:13.223 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:13.223 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:13.223 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:13.223 CC lib/ftl/base/ftl_base_dev.o 00:02:13.223 CC lib/ftl/base/ftl_base_bdev.o 00:02:13.223 CC lib/ftl/ftl_trace.o 00:02:13.793 LIB libspdk_nbd.a 00:02:13.793 SO libspdk_nbd.so.7.0 00:02:13.793 LIB libspdk_scsi.a 00:02:13.793 SYMLINK libspdk_nbd.so 00:02:13.793 SO libspdk_scsi.so.9.0 00:02:13.793 LIB libspdk_ublk.a 00:02:13.793 SO libspdk_ublk.so.3.0 00:02:13.793 SYMLINK libspdk_scsi.so 00:02:14.056 SYMLINK libspdk_ublk.so 00:02:14.056 LIB libspdk_ftl.a 00:02:14.316 CC lib/iscsi/conn.o 00:02:14.316 CC lib/iscsi/init_grp.o 00:02:14.316 CC lib/iscsi/iscsi.o 00:02:14.316 CC lib/iscsi/param.o 00:02:14.316 CC lib/iscsi/md5.o 00:02:14.316 CC lib/iscsi/iscsi_subsystem.o 00:02:14.316 CC lib/iscsi/portal_grp.o 00:02:14.316 CC lib/iscsi/tgt_node.o 00:02:14.316 CC lib/iscsi/iscsi_rpc.o 00:02:14.316 CC lib/iscsi/task.o 00:02:14.316 CC lib/vhost/vhost.o 00:02:14.316 CC lib/vhost/vhost_rpc.o 00:02:14.316 CC lib/vhost/vhost_scsi.o 00:02:14.316 CC lib/vhost/vhost_blk.o 00:02:14.316 CC lib/vhost/rte_vhost_user.o 00:02:14.316 SO libspdk_ftl.so.9.0 00:02:14.578 SYMLINK libspdk_ftl.so 00:02:15.148 LIB libspdk_nvmf.a 00:02:15.148 SO libspdk_nvmf.so.19.0 00:02:15.148 LIB libspdk_vhost.a 00:02:15.148 SO libspdk_vhost.so.8.0 00:02:15.410 SYMLINK libspdk_nvmf.so 00:02:15.410 SYMLINK libspdk_vhost.so 00:02:15.410 LIB libspdk_iscsi.a 00:02:15.410 SO libspdk_iscsi.so.8.0 00:02:15.671 SYMLINK libspdk_iscsi.so 00:02:16.243 CC module/vfu_device/vfu_virtio.o 00:02:16.243 CC module/vfu_device/vfu_virtio_blk.o 00:02:16.243 CC module/vfu_device/vfu_virtio_scsi.o 00:02:16.243 CC module/vfu_device/vfu_virtio_rpc.o 00:02:16.243 CC module/env_dpdk/env_dpdk_rpc.o 00:02:16.243 CC module/accel/dsa/accel_dsa.o 00:02:16.243 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:16.243 CC module/accel/dsa/accel_dsa_rpc.o 00:02:16.243 LIB libspdk_env_dpdk_rpc.a 00:02:16.243 CC module/accel/error/accel_error.o 00:02:16.243 CC module/accel/error/accel_error_rpc.o 00:02:16.243 CC module/sock/posix/posix.o 00:02:16.503 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:16.503 CC module/blob/bdev/blob_bdev.o 00:02:16.503 CC module/accel/ioat/accel_ioat.o 00:02:16.503 CC module/accel/ioat/accel_ioat_rpc.o 00:02:16.503 CC module/keyring/linux/keyring.o 00:02:16.503 CC module/keyring/file/keyring.o 00:02:16.503 CC module/keyring/linux/keyring_rpc.o 00:02:16.503 CC module/keyring/file/keyring_rpc.o 00:02:16.503 CC module/scheduler/gscheduler/gscheduler.o 00:02:16.503 CC module/accel/iaa/accel_iaa.o 00:02:16.503 CC module/accel/iaa/accel_iaa_rpc.o 00:02:16.503 SO libspdk_env_dpdk_rpc.so.6.0 00:02:16.503 SYMLINK libspdk_env_dpdk_rpc.so 00:02:16.503 LIB libspdk_keyring_linux.a 00:02:16.503 LIB libspdk_scheduler_dpdk_governor.a 00:02:16.503 LIB libspdk_accel_error.a 00:02:16.503 LIB libspdk_keyring_file.a 00:02:16.503 LIB libspdk_scheduler_gscheduler.a 00:02:16.503 SO libspdk_keyring_linux.so.1.0 00:02:16.503 SO libspdk_accel_error.so.2.0 00:02:16.503 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:16.503 SO libspdk_keyring_file.so.1.0 00:02:16.503 SO libspdk_scheduler_gscheduler.so.4.0 00:02:16.503 LIB libspdk_accel_ioat.a 00:02:16.503 LIB libspdk_scheduler_dynamic.a 00:02:16.503 LIB libspdk_accel_dsa.a 00:02:16.503 LIB libspdk_accel_iaa.a 00:02:16.503 SO libspdk_accel_dsa.so.5.0 00:02:16.764 LIB 
libspdk_blob_bdev.a 00:02:16.764 SO libspdk_accel_ioat.so.6.0 00:02:16.764 SO libspdk_scheduler_dynamic.so.4.0 00:02:16.764 SYMLINK libspdk_scheduler_gscheduler.so 00:02:16.764 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:16.764 SYMLINK libspdk_keyring_linux.so 00:02:16.764 SYMLINK libspdk_accel_error.so 00:02:16.764 SO libspdk_accel_iaa.so.3.0 00:02:16.764 SYMLINK libspdk_keyring_file.so 00:02:16.764 SO libspdk_blob_bdev.so.11.0 00:02:16.764 SYMLINK libspdk_accel_dsa.so 00:02:16.764 SYMLINK libspdk_accel_ioat.so 00:02:16.764 SYMLINK libspdk_scheduler_dynamic.so 00:02:16.764 SYMLINK libspdk_accel_iaa.so 00:02:16.764 LIB libspdk_vfu_device.a 00:02:16.764 SYMLINK libspdk_blob_bdev.so 00:02:16.764 SO libspdk_vfu_device.so.3.0 00:02:16.764 SYMLINK libspdk_vfu_device.so 00:02:17.026 LIB libspdk_sock_posix.a 00:02:17.026 SO libspdk_sock_posix.so.6.0 00:02:17.287 SYMLINK libspdk_sock_posix.so 00:02:17.287 CC module/blobfs/bdev/blobfs_bdev.o 00:02:17.287 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:17.287 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:17.287 CC module/bdev/delay/vbdev_delay.o 00:02:17.287 CC module/bdev/null/bdev_null.o 00:02:17.287 CC module/bdev/null/bdev_null_rpc.o 00:02:17.287 CC module/bdev/lvol/vbdev_lvol.o 00:02:17.287 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:17.287 CC module/bdev/aio/bdev_aio.o 00:02:17.287 CC module/bdev/aio/bdev_aio_rpc.o 00:02:17.287 CC module/bdev/iscsi/bdev_iscsi.o 00:02:17.287 CC module/bdev/error/vbdev_error.o 00:02:17.287 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:17.287 CC module/bdev/error/vbdev_error_rpc.o 00:02:17.287 CC module/bdev/nvme/bdev_nvme.o 00:02:17.287 CC module/bdev/malloc/bdev_malloc.o 00:02:17.287 CC module/bdev/gpt/gpt.o 00:02:17.287 CC module/bdev/raid/bdev_raid.o 00:02:17.287 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:17.287 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:17.287 CC module/bdev/gpt/vbdev_gpt.o 00:02:17.287 CC module/bdev/nvme/nvme_rpc.o 00:02:17.287 CC module/bdev/raid/bdev_raid_rpc.o 00:02:17.287 CC module/bdev/nvme/bdev_mdns_client.o 00:02:17.287 CC module/bdev/raid/bdev_raid_sb.o 00:02:17.287 CC module/bdev/split/vbdev_split.o 00:02:17.287 CC module/bdev/nvme/vbdev_opal.o 00:02:17.287 CC module/bdev/raid/raid0.o 00:02:17.287 CC module/bdev/split/vbdev_split_rpc.o 00:02:17.287 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:17.287 CC module/bdev/raid/raid1.o 00:02:17.287 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:17.287 CC module/bdev/raid/concat.o 00:02:17.287 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:17.287 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:17.287 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:17.287 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:17.287 CC module/bdev/passthru/vbdev_passthru.o 00:02:17.287 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:17.287 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:17.287 CC module/bdev/ftl/bdev_ftl.o 00:02:17.287 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:17.548 LIB libspdk_blobfs_bdev.a 00:02:17.548 SO libspdk_blobfs_bdev.so.6.0 00:02:17.548 LIB libspdk_bdev_null.a 00:02:17.548 LIB libspdk_bdev_split.a 00:02:17.548 LIB libspdk_bdev_error.a 00:02:17.548 SO libspdk_bdev_null.so.6.0 00:02:17.548 LIB libspdk_bdev_gpt.a 00:02:17.548 SO libspdk_bdev_split.so.6.0 00:02:17.548 SYMLINK libspdk_blobfs_bdev.so 00:02:17.548 SO libspdk_bdev_error.so.6.0 00:02:17.548 LIB libspdk_bdev_aio.a 00:02:17.548 SO libspdk_bdev_gpt.so.6.0 00:02:17.548 LIB libspdk_bdev_ftl.a 00:02:17.548 LIB libspdk_bdev_passthru.a 00:02:17.809 LIB libspdk_bdev_delay.a 
00:02:17.809 SYMLINK libspdk_bdev_null.so 00:02:17.809 SYMLINK libspdk_bdev_split.so 00:02:17.809 SO libspdk_bdev_aio.so.6.0 00:02:17.809 SYMLINK libspdk_bdev_error.so 00:02:17.809 LIB libspdk_bdev_zone_block.a 00:02:17.809 LIB libspdk_bdev_malloc.a 00:02:17.809 SO libspdk_bdev_passthru.so.6.0 00:02:17.809 LIB libspdk_bdev_iscsi.a 00:02:17.809 SO libspdk_bdev_ftl.so.6.0 00:02:17.809 SO libspdk_bdev_delay.so.6.0 00:02:17.809 SYMLINK libspdk_bdev_gpt.so 00:02:17.809 SO libspdk_bdev_malloc.so.6.0 00:02:17.809 SO libspdk_bdev_zone_block.so.6.0 00:02:17.810 SO libspdk_bdev_iscsi.so.6.0 00:02:17.810 SYMLINK libspdk_bdev_aio.so 00:02:17.810 SYMLINK libspdk_bdev_passthru.so 00:02:17.810 SYMLINK libspdk_bdev_ftl.so 00:02:17.810 SYMLINK libspdk_bdev_delay.so 00:02:17.810 LIB libspdk_bdev_lvol.a 00:02:17.810 SYMLINK libspdk_bdev_malloc.so 00:02:17.810 LIB libspdk_bdev_virtio.a 00:02:17.810 SYMLINK libspdk_bdev_zone_block.so 00:02:17.810 SYMLINK libspdk_bdev_iscsi.so 00:02:17.810 SO libspdk_bdev_lvol.so.6.0 00:02:17.810 SO libspdk_bdev_virtio.so.6.0 00:02:18.071 SYMLINK libspdk_bdev_lvol.so 00:02:18.071 SYMLINK libspdk_bdev_virtio.so 00:02:18.332 LIB libspdk_bdev_raid.a 00:02:18.332 SO libspdk_bdev_raid.so.6.0 00:02:18.332 SYMLINK libspdk_bdev_raid.so 00:02:19.274 LIB libspdk_bdev_nvme.a 00:02:19.274 SO libspdk_bdev_nvme.so.7.0 00:02:19.535 SYMLINK libspdk_bdev_nvme.so 00:02:20.107 CC module/event/subsystems/vmd/vmd.o 00:02:20.107 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:20.107 CC module/event/subsystems/keyring/keyring.o 00:02:20.107 CC module/event/subsystems/scheduler/scheduler.o 00:02:20.107 CC module/event/subsystems/iobuf/iobuf.o 00:02:20.107 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:20.107 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:20.107 CC module/event/subsystems/sock/sock.o 00:02:20.107 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:20.368 LIB libspdk_event_vmd.a 00:02:20.368 LIB libspdk_event_keyring.a 00:02:20.368 LIB libspdk_event_scheduler.a 00:02:20.368 LIB libspdk_event_vhost_blk.a 00:02:20.368 LIB libspdk_event_vfu_tgt.a 00:02:20.368 LIB libspdk_event_sock.a 00:02:20.368 SO libspdk_event_vmd.so.6.0 00:02:20.368 SO libspdk_event_keyring.so.1.0 00:02:20.368 LIB libspdk_event_iobuf.a 00:02:20.368 SO libspdk_event_scheduler.so.4.0 00:02:20.368 SO libspdk_event_vhost_blk.so.3.0 00:02:20.368 SO libspdk_event_vfu_tgt.so.3.0 00:02:20.368 SO libspdk_event_sock.so.5.0 00:02:20.368 SO libspdk_event_iobuf.so.3.0 00:02:20.368 SYMLINK libspdk_event_keyring.so 00:02:20.368 SYMLINK libspdk_event_vmd.so 00:02:20.368 SYMLINK libspdk_event_sock.so 00:02:20.368 SYMLINK libspdk_event_scheduler.so 00:02:20.368 SYMLINK libspdk_event_vhost_blk.so 00:02:20.368 SYMLINK libspdk_event_vfu_tgt.so 00:02:20.368 SYMLINK libspdk_event_iobuf.so 00:02:20.941 CC module/event/subsystems/accel/accel.o 00:02:20.941 LIB libspdk_event_accel.a 00:02:20.941 SO libspdk_event_accel.so.6.0 00:02:20.941 SYMLINK libspdk_event_accel.so 00:02:21.513 CC module/event/subsystems/bdev/bdev.o 00:02:21.513 LIB libspdk_event_bdev.a 00:02:21.513 SO libspdk_event_bdev.so.6.0 00:02:21.774 SYMLINK libspdk_event_bdev.so 00:02:22.035 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:22.035 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:22.035 CC module/event/subsystems/nbd/nbd.o 00:02:22.035 CC module/event/subsystems/ublk/ublk.o 00:02:22.035 CC module/event/subsystems/scsi/scsi.o 00:02:22.297 LIB libspdk_event_ublk.a 00:02:22.297 LIB libspdk_event_nbd.a 00:02:22.297 LIB libspdk_event_scsi.a 00:02:22.297 SO 
libspdk_event_ublk.so.3.0 00:02:22.297 SO libspdk_event_nbd.so.6.0 00:02:22.297 LIB libspdk_event_nvmf.a 00:02:22.297 SO libspdk_event_scsi.so.6.0 00:02:22.297 SO libspdk_event_nvmf.so.6.0 00:02:22.297 SYMLINK libspdk_event_ublk.so 00:02:22.297 SYMLINK libspdk_event_nbd.so 00:02:22.297 SYMLINK libspdk_event_scsi.so 00:02:22.297 SYMLINK libspdk_event_nvmf.so 00:02:22.869 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:22.869 CC module/event/subsystems/iscsi/iscsi.o 00:02:22.869 LIB libspdk_event_vhost_scsi.a 00:02:22.869 LIB libspdk_event_iscsi.a 00:02:22.869 SO libspdk_event_vhost_scsi.so.3.0 00:02:22.869 SO libspdk_event_iscsi.so.6.0 00:02:22.869 SYMLINK libspdk_event_vhost_scsi.so 00:02:23.129 SYMLINK libspdk_event_iscsi.so 00:02:23.129 SO libspdk.so.6.0 00:02:23.129 SYMLINK libspdk.so 00:02:23.701 TEST_HEADER include/spdk/accel.h 00:02:23.701 TEST_HEADER include/spdk/accel_module.h 00:02:23.701 TEST_HEADER include/spdk/barrier.h 00:02:23.701 TEST_HEADER include/spdk/base64.h 00:02:23.701 CXX app/trace/trace.o 00:02:23.701 TEST_HEADER include/spdk/assert.h 00:02:23.701 TEST_HEADER include/spdk/bdev.h 00:02:23.701 TEST_HEADER include/spdk/bdev_module.h 00:02:23.701 CC test/rpc_client/rpc_client_test.o 00:02:23.701 TEST_HEADER include/spdk/bdev_zone.h 00:02:23.701 TEST_HEADER include/spdk/bit_array.h 00:02:23.701 TEST_HEADER include/spdk/blob_bdev.h 00:02:23.701 TEST_HEADER include/spdk/bit_pool.h 00:02:23.701 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:23.701 TEST_HEADER include/spdk/blobfs.h 00:02:23.701 TEST_HEADER include/spdk/conf.h 00:02:23.701 TEST_HEADER include/spdk/blob.h 00:02:23.701 CC app/spdk_nvme_perf/perf.o 00:02:23.701 TEST_HEADER include/spdk/config.h 00:02:23.701 TEST_HEADER include/spdk/cpuset.h 00:02:23.701 CC app/trace_record/trace_record.o 00:02:23.701 TEST_HEADER include/spdk/crc16.h 00:02:23.701 TEST_HEADER include/spdk/crc32.h 00:02:23.701 TEST_HEADER include/spdk/crc64.h 00:02:23.701 TEST_HEADER include/spdk/dif.h 00:02:23.701 TEST_HEADER include/spdk/endian.h 00:02:23.701 TEST_HEADER include/spdk/dma.h 00:02:23.701 TEST_HEADER include/spdk/env_dpdk.h 00:02:23.701 TEST_HEADER include/spdk/env.h 00:02:23.701 TEST_HEADER include/spdk/event.h 00:02:23.701 TEST_HEADER include/spdk/fd.h 00:02:23.701 TEST_HEADER include/spdk/fd_group.h 00:02:23.701 TEST_HEADER include/spdk/file.h 00:02:23.701 TEST_HEADER include/spdk/ftl.h 00:02:23.701 CC app/spdk_top/spdk_top.o 00:02:23.701 TEST_HEADER include/spdk/hexlify.h 00:02:23.701 TEST_HEADER include/spdk/histogram_data.h 00:02:23.701 TEST_HEADER include/spdk/gpt_spec.h 00:02:23.701 CC app/spdk_lspci/spdk_lspci.o 00:02:23.701 TEST_HEADER include/spdk/idxd.h 00:02:23.701 TEST_HEADER include/spdk/idxd_spec.h 00:02:23.701 TEST_HEADER include/spdk/init.h 00:02:23.701 TEST_HEADER include/spdk/ioat_spec.h 00:02:23.701 TEST_HEADER include/spdk/ioat.h 00:02:23.701 TEST_HEADER include/spdk/iscsi_spec.h 00:02:23.701 CC app/spdk_nvme_identify/identify.o 00:02:23.701 TEST_HEADER include/spdk/json.h 00:02:23.701 TEST_HEADER include/spdk/jsonrpc.h 00:02:23.701 CC app/spdk_nvme_discover/discovery_aer.o 00:02:23.701 TEST_HEADER include/spdk/likely.h 00:02:23.701 TEST_HEADER include/spdk/keyring.h 00:02:23.701 TEST_HEADER include/spdk/keyring_module.h 00:02:23.701 TEST_HEADER include/spdk/log.h 00:02:23.701 TEST_HEADER include/spdk/lvol.h 00:02:23.701 TEST_HEADER include/spdk/mmio.h 00:02:23.701 TEST_HEADER include/spdk/memory.h 00:02:23.701 TEST_HEADER include/spdk/nbd.h 00:02:23.701 TEST_HEADER include/spdk/nvme.h 00:02:23.701 
TEST_HEADER include/spdk/notify.h 00:02:23.701 TEST_HEADER include/spdk/nvme_intel.h 00:02:23.701 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:23.701 TEST_HEADER include/spdk/nvme_zns.h 00:02:23.701 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:23.701 TEST_HEADER include/spdk/nvme_spec.h 00:02:23.701 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:23.701 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:23.701 TEST_HEADER include/spdk/nvmf.h 00:02:23.701 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:23.701 TEST_HEADER include/spdk/nvmf_spec.h 00:02:23.701 TEST_HEADER include/spdk/nvmf_transport.h 00:02:23.701 CC app/nvmf_tgt/nvmf_main.o 00:02:23.701 TEST_HEADER include/spdk/opal.h 00:02:23.701 TEST_HEADER include/spdk/opal_spec.h 00:02:23.701 TEST_HEADER include/spdk/pci_ids.h 00:02:23.701 TEST_HEADER include/spdk/pipe.h 00:02:23.701 TEST_HEADER include/spdk/queue.h 00:02:23.701 TEST_HEADER include/spdk/reduce.h 00:02:23.701 TEST_HEADER include/spdk/rpc.h 00:02:23.701 TEST_HEADER include/spdk/scheduler.h 00:02:23.701 CC app/spdk_dd/spdk_dd.o 00:02:23.701 TEST_HEADER include/spdk/scsi.h 00:02:23.701 TEST_HEADER include/spdk/scsi_spec.h 00:02:23.701 TEST_HEADER include/spdk/sock.h 00:02:23.701 TEST_HEADER include/spdk/stdinc.h 00:02:23.701 TEST_HEADER include/spdk/string.h 00:02:23.701 TEST_HEADER include/spdk/thread.h 00:02:23.701 CC app/iscsi_tgt/iscsi_tgt.o 00:02:23.701 TEST_HEADER include/spdk/trace.h 00:02:23.701 TEST_HEADER include/spdk/trace_parser.h 00:02:23.701 TEST_HEADER include/spdk/tree.h 00:02:23.701 TEST_HEADER include/spdk/ublk.h 00:02:23.701 TEST_HEADER include/spdk/uuid.h 00:02:23.701 TEST_HEADER include/spdk/util.h 00:02:23.701 TEST_HEADER include/spdk/version.h 00:02:23.701 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:23.701 TEST_HEADER include/spdk/vhost.h 00:02:23.701 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:23.701 TEST_HEADER include/spdk/vmd.h 00:02:23.701 TEST_HEADER include/spdk/zipf.h 00:02:23.701 TEST_HEADER include/spdk/xor.h 00:02:23.701 CXX test/cpp_headers/accel.o 00:02:23.701 CXX test/cpp_headers/accel_module.o 00:02:23.701 CXX test/cpp_headers/assert.o 00:02:23.701 CC app/spdk_tgt/spdk_tgt.o 00:02:23.701 CXX test/cpp_headers/barrier.o 00:02:23.701 CXX test/cpp_headers/base64.o 00:02:23.701 CXX test/cpp_headers/bdev.o 00:02:23.701 CXX test/cpp_headers/bdev_module.o 00:02:23.701 CXX test/cpp_headers/bdev_zone.o 00:02:23.701 CXX test/cpp_headers/bit_array.o 00:02:23.701 CXX test/cpp_headers/blob_bdev.o 00:02:23.701 CXX test/cpp_headers/bit_pool.o 00:02:23.701 CXX test/cpp_headers/blobfs_bdev.o 00:02:23.701 CXX test/cpp_headers/blob.o 00:02:23.701 CXX test/cpp_headers/blobfs.o 00:02:23.701 CXX test/cpp_headers/conf.o 00:02:23.701 CXX test/cpp_headers/config.o 00:02:23.701 CXX test/cpp_headers/crc16.o 00:02:23.701 CXX test/cpp_headers/cpuset.o 00:02:23.701 CXX test/cpp_headers/crc32.o 00:02:23.701 CXX test/cpp_headers/crc64.o 00:02:23.701 CXX test/cpp_headers/dma.o 00:02:23.701 CXX test/cpp_headers/dif.o 00:02:23.701 CXX test/cpp_headers/endian.o 00:02:23.701 CXX test/cpp_headers/env_dpdk.o 00:02:23.701 CXX test/cpp_headers/env.o 00:02:23.701 CXX test/cpp_headers/event.o 00:02:23.701 CXX test/cpp_headers/fd_group.o 00:02:23.701 CXX test/cpp_headers/ftl.o 00:02:23.701 CXX test/cpp_headers/fd.o 00:02:23.701 CXX test/cpp_headers/file.o 00:02:23.701 CXX test/cpp_headers/hexlify.o 00:02:23.701 CXX test/cpp_headers/gpt_spec.o 00:02:23.701 CXX test/cpp_headers/histogram_data.o 00:02:23.701 CXX test/cpp_headers/idxd_spec.o 00:02:23.701 CXX test/cpp_headers/idxd.o 
00:02:23.701 CXX test/cpp_headers/init.o 00:02:23.701 CXX test/cpp_headers/ioat.o 00:02:23.701 CXX test/cpp_headers/ioat_spec.o 00:02:23.701 CXX test/cpp_headers/json.o 00:02:23.701 CXX test/cpp_headers/iscsi_spec.o 00:02:23.701 CXX test/cpp_headers/jsonrpc.o 00:02:23.701 CXX test/cpp_headers/keyring.o 00:02:23.701 CXX test/cpp_headers/keyring_module.o 00:02:23.701 CXX test/cpp_headers/likely.o 00:02:23.701 CXX test/cpp_headers/memory.o 00:02:23.701 CXX test/cpp_headers/lvol.o 00:02:23.701 CXX test/cpp_headers/mmio.o 00:02:23.701 CXX test/cpp_headers/log.o 00:02:23.701 CXX test/cpp_headers/notify.o 00:02:23.701 CXX test/cpp_headers/nbd.o 00:02:23.701 CXX test/cpp_headers/nvme_ocssd.o 00:02:23.701 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:23.701 CXX test/cpp_headers/nvme.o 00:02:23.701 CXX test/cpp_headers/nvme_intel.o 00:02:23.701 CXX test/cpp_headers/nvme_zns.o 00:02:23.701 CXX test/cpp_headers/nvme_spec.o 00:02:23.701 CXX test/cpp_headers/nvmf_spec.o 00:02:23.701 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:23.701 CXX test/cpp_headers/nvmf_cmd.o 00:02:23.701 CXX test/cpp_headers/nvmf.o 00:02:23.701 CXX test/cpp_headers/opal.o 00:02:23.701 CXX test/cpp_headers/pci_ids.o 00:02:23.701 CXX test/cpp_headers/opal_spec.o 00:02:23.701 CXX test/cpp_headers/nvmf_transport.o 00:02:23.701 CXX test/cpp_headers/pipe.o 00:02:23.701 CXX test/cpp_headers/rpc.o 00:02:23.701 CXX test/cpp_headers/queue.o 00:02:23.701 CXX test/cpp_headers/reduce.o 00:02:23.701 CXX test/cpp_headers/scsi_spec.o 00:02:23.701 CXX test/cpp_headers/scheduler.o 00:02:23.701 CXX test/cpp_headers/scsi.o 00:02:23.701 CXX test/cpp_headers/sock.o 00:02:23.701 CXX test/cpp_headers/stdinc.o 00:02:23.701 CXX test/cpp_headers/string.o 00:02:23.701 CXX test/cpp_headers/thread.o 00:02:23.701 CXX test/cpp_headers/trace.o 00:02:23.701 CXX test/cpp_headers/trace_parser.o 00:02:23.701 CXX test/cpp_headers/tree.o 00:02:23.701 CXX test/cpp_headers/util.o 00:02:23.701 CXX test/cpp_headers/ublk.o 00:02:23.701 CXX test/cpp_headers/uuid.o 00:02:23.701 CXX test/cpp_headers/version.o 00:02:23.701 CXX test/cpp_headers/vmd.o 00:02:23.701 CXX test/cpp_headers/vfio_user_pci.o 00:02:23.701 CXX test/cpp_headers/vfio_user_spec.o 00:02:23.701 CXX test/cpp_headers/xor.o 00:02:23.701 CXX test/cpp_headers/vhost.o 00:02:23.701 CXX test/cpp_headers/zipf.o 00:02:23.701 CC test/thread/poller_perf/poller_perf.o 00:02:23.701 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:23.701 CC test/env/memory/memory_ut.o 00:02:23.701 CC test/env/pci/pci_ut.o 00:02:23.701 CC test/app/histogram_perf/histogram_perf.o 00:02:23.701 CC examples/util/zipf/zipf.o 00:02:23.702 CC test/env/vtophys/vtophys.o 00:02:23.702 CC examples/ioat/verify/verify.o 00:02:23.702 CC examples/ioat/perf/perf.o 00:02:23.702 CC test/app/stub/stub.o 00:02:23.963 LINK spdk_lspci 00:02:23.963 CC test/app/jsoncat/jsoncat.o 00:02:23.963 CC test/app/bdev_svc/bdev_svc.o 00:02:23.963 CC test/dma/test_dma/test_dma.o 00:02:23.963 LINK rpc_client_test 00:02:23.963 CC app/fio/nvme/fio_plugin.o 00:02:23.963 CC app/fio/bdev/fio_plugin.o 00:02:23.963 LINK spdk_nvme_discover 00:02:23.963 LINK spdk_trace_record 00:02:24.222 LINK interrupt_tgt 00:02:24.222 LINK nvmf_tgt 00:02:24.222 LINK iscsi_tgt 00:02:24.222 CC test/env/mem_callbacks/mem_callbacks.o 00:02:24.222 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:24.222 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:24.222 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:24.222 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:24.222 LINK spdk_tgt 00:02:24.222 LINK jsoncat 
00:02:24.222 LINK histogram_perf 00:02:24.222 LINK ioat_perf 00:02:24.222 LINK spdk_trace 00:02:24.482 LINK poller_perf 00:02:24.482 LINK zipf 00:02:24.482 LINK vtophys 00:02:24.482 LINK env_dpdk_post_init 00:02:24.482 LINK stub 00:02:24.482 LINK bdev_svc 00:02:24.482 LINK verify 00:02:24.482 LINK spdk_dd 00:02:24.742 LINK pci_ut 00:02:24.742 LINK test_dma 00:02:24.742 LINK nvme_fuzz 00:02:24.742 LINK vhost_fuzz 00:02:24.742 LINK spdk_bdev 00:02:24.742 CC app/vhost/vhost.o 00:02:24.742 LINK spdk_nvme_identify 00:02:24.742 LINK spdk_top 00:02:24.742 LINK spdk_nvme 00:02:24.742 LINK spdk_nvme_perf 00:02:24.742 CC examples/sock/hello_world/hello_sock.o 00:02:24.742 CC examples/vmd/lsvmd/lsvmd.o 00:02:24.742 CC examples/vmd/led/led.o 00:02:24.742 CC test/event/event_perf/event_perf.o 00:02:24.742 LINK mem_callbacks 00:02:24.742 CC examples/idxd/perf/perf.o 00:02:24.742 CC test/event/reactor/reactor.o 00:02:24.742 CC test/event/reactor_perf/reactor_perf.o 00:02:24.742 CC test/event/app_repeat/app_repeat.o 00:02:25.002 CC examples/thread/thread/thread_ex.o 00:02:25.002 CC test/event/scheduler/scheduler.o 00:02:25.002 LINK vhost 00:02:25.002 LINK lsvmd 00:02:25.002 LINK reactor_perf 00:02:25.002 LINK led 00:02:25.002 LINK event_perf 00:02:25.002 LINK reactor 00:02:25.002 LINK app_repeat 00:02:25.002 LINK hello_sock 00:02:25.262 LINK idxd_perf 00:02:25.262 LINK scheduler 00:02:25.262 LINK thread 00:02:25.262 CC test/nvme/cuse/cuse.o 00:02:25.262 CC test/nvme/aer/aer.o 00:02:25.262 CC test/nvme/err_injection/err_injection.o 00:02:25.262 CC test/blobfs/mkfs/mkfs.o 00:02:25.262 CC test/nvme/connect_stress/connect_stress.o 00:02:25.262 CC test/nvme/reset/reset.o 00:02:25.262 CC test/nvme/e2edp/nvme_dp.o 00:02:25.262 CC test/nvme/overhead/overhead.o 00:02:25.262 CC test/nvme/fused_ordering/fused_ordering.o 00:02:25.262 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:25.262 CC test/nvme/startup/startup.o 00:02:25.262 CC test/nvme/simple_copy/simple_copy.o 00:02:25.262 CC test/nvme/compliance/nvme_compliance.o 00:02:25.262 CC test/nvme/sgl/sgl.o 00:02:25.262 CC test/nvme/reserve/reserve.o 00:02:25.262 CC test/nvme/fdp/fdp.o 00:02:25.262 CC test/nvme/boot_partition/boot_partition.o 00:02:25.262 LINK memory_ut 00:02:25.262 CC test/accel/dif/dif.o 00:02:25.524 CC test/lvol/esnap/esnap.o 00:02:25.524 LINK boot_partition 00:02:25.524 LINK connect_stress 00:02:25.524 LINK startup 00:02:25.524 LINK fused_ordering 00:02:25.524 LINK err_injection 00:02:25.524 LINK doorbell_aers 00:02:25.524 LINK mkfs 00:02:25.524 LINK reserve 00:02:25.524 LINK sgl 00:02:25.524 LINK simple_copy 00:02:25.524 LINK nvme_dp 00:02:25.524 LINK reset 00:02:25.524 CC examples/nvme/hello_world/hello_world.o 00:02:25.524 CC examples/nvme/hotplug/hotplug.o 00:02:25.524 LINK aer 00:02:25.524 LINK overhead 00:02:25.524 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:25.524 CC examples/nvme/reconnect/reconnect.o 00:02:25.524 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:25.524 CC examples/nvme/arbitration/arbitration.o 00:02:25.524 CC examples/nvme/abort/abort.o 00:02:25.524 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:25.524 LINK nvme_compliance 00:02:25.524 LINK fdp 00:02:25.785 CC examples/accel/perf/accel_perf.o 00:02:25.785 LINK dif 00:02:25.785 CC examples/blob/cli/blobcli.o 00:02:25.785 LINK pmr_persistence 00:02:25.785 LINK cmb_copy 00:02:25.785 LINK hotplug 00:02:25.785 CC examples/blob/hello_world/hello_blob.o 00:02:25.785 LINK iscsi_fuzz 00:02:25.785 LINK hello_world 00:02:25.785 LINK arbitration 00:02:25.785 LINK 
reconnect 00:02:25.785 LINK abort 00:02:26.046 LINK nvme_manage 00:02:26.046 LINK hello_blob 00:02:26.046 LINK accel_perf 00:02:26.307 LINK blobcli 00:02:26.307 CC test/bdev/bdevio/bdevio.o 00:02:26.307 LINK cuse 00:02:26.568 CC examples/bdev/hello_world/hello_bdev.o 00:02:26.568 CC examples/bdev/bdevperf/bdevperf.o 00:02:26.568 LINK bdevio 00:02:26.828 LINK hello_bdev 00:02:27.400 LINK bdevperf 00:02:28.034 CC examples/nvmf/nvmf/nvmf.o 00:02:28.294 LINK nvmf 00:02:29.676 LINK esnap 00:02:29.935 00:02:29.935 real 0m51.166s 00:02:29.935 user 6m33.576s 00:02:29.935 sys 4m10.010s 00:02:29.935 23:38:44 make -- common/autotest_common.sh@1118 -- $ xtrace_disable 00:02:29.935 23:38:44 make -- common/autotest_common.sh@10 -- $ set +x 00:02:29.935 ************************************ 00:02:29.935 END TEST make 00:02:29.935 ************************************ 00:02:29.935 23:38:45 -- common/autotest_common.sh@1136 -- $ return 0 00:02:29.935 23:38:45 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:29.935 23:38:45 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:29.935 23:38:45 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:29.935 23:38:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.935 23:38:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:29.935 23:38:45 -- pm/common@44 -- $ pid=105816 00:02:29.936 23:38:45 -- pm/common@50 -- $ kill -TERM 105816 00:02:29.936 23:38:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.936 23:38:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:29.936 23:38:45 -- pm/common@44 -- $ pid=105817 00:02:29.936 23:38:45 -- pm/common@50 -- $ kill -TERM 105817 00:02:29.936 23:38:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.936 23:38:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:29.936 23:38:45 -- pm/common@44 -- $ pid=105819 00:02:29.936 23:38:45 -- pm/common@50 -- $ kill -TERM 105819 00:02:29.936 23:38:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.936 23:38:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:29.936 23:38:45 -- pm/common@44 -- $ pid=105844 00:02:29.936 23:38:45 -- pm/common@50 -- $ sudo -E kill -TERM 105844 00:02:30.218 23:38:45 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:30.218 23:38:45 -- nvmf/common.sh@7 -- # uname -s 00:02:30.218 23:38:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:30.218 23:38:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:30.218 23:38:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:30.218 23:38:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:30.218 23:38:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:30.218 23:38:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:30.218 23:38:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:30.218 23:38:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:30.218 23:38:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:30.218 23:38:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:30.218 23:38:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:30.218 
23:38:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:30.218 23:38:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:30.218 23:38:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:30.218 23:38:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:30.218 23:38:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:30.218 23:38:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:30.218 23:38:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:30.218 23:38:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:30.218 23:38:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:30.218 23:38:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.218 23:38:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.218 23:38:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.218 23:38:45 -- paths/export.sh@5 -- # export PATH 00:02:30.218 23:38:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.218 23:38:45 -- nvmf/common.sh@47 -- # : 0 00:02:30.218 23:38:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:30.218 23:38:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:30.218 23:38:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:30.218 23:38:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:30.218 23:38:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:30.218 23:38:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:30.218 23:38:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:30.218 23:38:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:30.218 23:38:45 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:30.218 23:38:45 -- spdk/autotest.sh@32 -- # uname -s 00:02:30.218 23:38:45 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:30.218 23:38:45 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:30.218 23:38:45 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:30.218 23:38:45 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:30.218 23:38:45 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:30.218 23:38:45 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:30.218 23:38:45 
-- spdk/autotest.sh@46 -- # type -P udevadm 00:02:30.218 23:38:45 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:30.218 23:38:45 -- spdk/autotest.sh@48 -- # udevadm_pid=168941 00:02:30.218 23:38:45 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:30.218 23:38:45 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:30.218 23:38:45 -- pm/common@17 -- # local monitor 00:02:30.218 23:38:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.218 23:38:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.218 23:38:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.219 23:38:45 -- pm/common@21 -- # date +%s 00:02:30.219 23:38:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.219 23:38:45 -- pm/common@21 -- # date +%s 00:02:30.219 23:38:45 -- pm/common@25 -- # sleep 1 00:02:30.219 23:38:45 -- pm/common@21 -- # date +%s 00:02:30.219 23:38:45 -- pm/common@21 -- # date +%s 00:02:30.219 23:38:45 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721079525 00:02:30.219 23:38:45 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721079525 00:02:30.219 23:38:45 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721079525 00:02:30.219 23:38:45 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721079525 00:02:30.219 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721079525_collect-vmstat.pm.log 00:02:30.219 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721079525_collect-cpu-load.pm.log 00:02:30.219 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721079525_collect-cpu-temp.pm.log 00:02:30.219 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721079525_collect-bmc-pm.bmc.pm.log 00:02:31.160 23:38:46 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:31.160 23:38:46 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:31.160 23:38:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:02:31.160 23:38:46 -- common/autotest_common.sh@10 -- # set +x 00:02:31.160 23:38:46 -- spdk/autotest.sh@59 -- # create_test_list 00:02:31.160 23:38:46 -- common/autotest_common.sh@740 -- # xtrace_disable 00:02:31.160 23:38:46 -- common/autotest_common.sh@10 -- # set +x 00:02:31.160 23:38:46 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:31.160 23:38:46 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:31.160 23:38:46 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:31.160 23:38:46 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:31.160 23:38:46 -- spdk/autotest.sh@63 
-- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:31.160 23:38:46 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:31.160 23:38:46 -- common/autotest_common.sh@1449 -- # uname 00:02:31.160 23:38:46 -- common/autotest_common.sh@1449 -- # '[' Linux = FreeBSD ']' 00:02:31.160 23:38:46 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:31.160 23:38:46 -- common/autotest_common.sh@1469 -- # uname 00:02:31.160 23:38:46 -- common/autotest_common.sh@1469 -- # [[ Linux = FreeBSD ]] 00:02:31.160 23:38:46 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:31.160 23:38:46 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:31.160 23:38:46 -- spdk/autotest.sh@72 -- # hash lcov 00:02:31.160 23:38:46 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:31.160 23:38:46 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:31.160 --rc lcov_branch_coverage=1 00:02:31.160 --rc lcov_function_coverage=1 00:02:31.160 --rc genhtml_branch_coverage=1 00:02:31.160 --rc genhtml_function_coverage=1 00:02:31.160 --rc genhtml_legend=1 00:02:31.160 --rc geninfo_all_blocks=1 00:02:31.160 ' 00:02:31.160 23:38:46 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:31.160 --rc lcov_branch_coverage=1 00:02:31.160 --rc lcov_function_coverage=1 00:02:31.160 --rc genhtml_branch_coverage=1 00:02:31.160 --rc genhtml_function_coverage=1 00:02:31.160 --rc genhtml_legend=1 00:02:31.160 --rc geninfo_all_blocks=1 00:02:31.160 ' 00:02:31.160 23:38:46 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:31.160 --rc lcov_branch_coverage=1 00:02:31.160 --rc lcov_function_coverage=1 00:02:31.160 --rc genhtml_branch_coverage=1 00:02:31.160 --rc genhtml_function_coverage=1 00:02:31.160 --rc genhtml_legend=1 00:02:31.160 --rc geninfo_all_blocks=1 00:02:31.160 --no-external' 00:02:31.160 23:38:46 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:31.160 --rc lcov_branch_coverage=1 00:02:31.160 --rc lcov_function_coverage=1 00:02:31.160 --rc genhtml_branch_coverage=1 00:02:31.160 --rc genhtml_function_coverage=1 00:02:31.160 --rc genhtml_legend=1 00:02:31.160 --rc geninfo_all_blocks=1 00:02:31.160 --no-external' 00:02:31.160 23:38:46 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:31.421 lcov: LCOV version 1.14 00:02:31.421 23:38:46 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:32.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:32.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:32.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:32.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:32.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:32.805 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:32.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:32.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:32.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:32.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:32.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:32.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:32.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:32.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:32.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:32.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:32.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:32.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:32.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:32.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:32.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:32.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:32.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:32.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:32.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:32.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:32.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:32.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:32.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:32.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:32.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:32.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:32.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:32.806 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:32.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:32.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:32.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:32.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:32.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:32.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:32.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:32.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:32.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:32.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:32.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:32.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:32.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:32.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:32.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:32.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:32.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:32.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:33.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:33.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:33.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:33.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:33.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:33.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:33.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:33.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:33.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:33.068 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:33.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:33.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:33.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:33.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:33.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:33.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:33.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:33.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:33.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:33.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:33.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:33.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:33.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:33.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:33.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:33.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:33.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:33.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:33.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:33.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:33.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:33.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:33.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:33.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:33.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:33.068 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:33.069 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:33.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:33.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:33.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:33.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:33.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:33.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:33.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:33.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:33.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:33.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:33.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:33.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:33.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:33.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:33.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:33.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:33.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:33.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:33.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:33.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:33.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:33.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:33.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:33.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:33.069 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 
00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:33.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:33.331 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:33.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:33.592 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:45.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:45.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:00.741 23:39:15 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:00.741 23:39:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:00.741 23:39:15 -- common/autotest_common.sh@10 -- # set +x 00:03:00.741 23:39:15 -- spdk/autotest.sh@91 -- # rm -f 00:03:00.741 23:39:15 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:04.944 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:04.944 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:04.944 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:04.944 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:04.944 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:04.944 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:04.944 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:04.944 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:04.944 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:04.944 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:04.944 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:04.944 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:04.944 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:04.944 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:04.944 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:04.944 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:04.944 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:04.944 23:39:19 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:04.944 23:39:19 -- common/autotest_common.sh@1663 -- # zoned_devs=() 00:03:04.944 23:39:19 -- common/autotest_common.sh@1663 -- # local -gA zoned_devs 00:03:04.944 23:39:19 -- common/autotest_common.sh@1664 -- # local nvme bdf 00:03:04.944 23:39:19 -- common/autotest_common.sh@1666 -- # for nvme in /sys/block/nvme* 00:03:04.944 23:39:19 -- common/autotest_common.sh@1667 -- # is_block_zoned nvme0n1 00:03:04.944 23:39:19 -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:03:04.944 23:39:19 -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:04.944 23:39:19 -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:03:04.944 23:39:19 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:04.944 23:39:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:04.944 23:39:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:04.944 23:39:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:04.944 23:39:19 -- 
scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:04.944 23:39:19 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:04.944 No valid GPT data, bailing 00:03:04.944 23:39:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:04.944 23:39:19 -- scripts/common.sh@391 -- # pt= 00:03:04.944 23:39:19 -- scripts/common.sh@392 -- # return 1 00:03:04.944 23:39:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:04.944 1+0 records in 00:03:04.944 1+0 records out 00:03:04.944 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00177621 s, 590 MB/s 00:03:04.944 23:39:19 -- spdk/autotest.sh@118 -- # sync 00:03:04.944 23:39:19 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:04.944 23:39:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:04.944 23:39:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:13.090 23:39:27 -- spdk/autotest.sh@124 -- # uname -s 00:03:13.090 23:39:27 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:13.090 23:39:27 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:13.090 23:39:27 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:13.090 23:39:27 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:13.090 23:39:27 -- common/autotest_common.sh@10 -- # set +x 00:03:13.090 ************************************ 00:03:13.090 START TEST setup.sh 00:03:13.090 ************************************ 00:03:13.090 23:39:27 setup.sh -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:13.090 * Looking for test storage... 00:03:13.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:13.090 23:39:27 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:13.090 23:39:27 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:13.090 23:39:27 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:13.090 23:39:27 setup.sh -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:13.090 23:39:27 setup.sh -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:13.090 23:39:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:13.090 ************************************ 00:03:13.090 START TEST acl 00:03:13.090 ************************************ 00:03:13.090 23:39:27 setup.sh.acl -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:13.090 * Looking for test storage... 
00:03:13.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:13.090 23:39:28 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:13.090 23:39:28 setup.sh.acl -- common/autotest_common.sh@1663 -- # zoned_devs=() 00:03:13.090 23:39:28 setup.sh.acl -- common/autotest_common.sh@1663 -- # local -gA zoned_devs 00:03:13.090 23:39:28 setup.sh.acl -- common/autotest_common.sh@1664 -- # local nvme bdf 00:03:13.090 23:39:28 setup.sh.acl -- common/autotest_common.sh@1666 -- # for nvme in /sys/block/nvme* 00:03:13.090 23:39:28 setup.sh.acl -- common/autotest_common.sh@1667 -- # is_block_zoned nvme0n1 00:03:13.090 23:39:28 setup.sh.acl -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:03:13.090 23:39:28 setup.sh.acl -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:13.090 23:39:28 setup.sh.acl -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:03:13.090 23:39:28 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:13.090 23:39:28 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:13.090 23:39:28 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:13.090 23:39:28 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:13.090 23:39:28 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:13.090 23:39:28 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:13.090 23:39:28 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.297 23:39:32 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:17.297 23:39:32 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:17.297 23:39:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.297 23:39:32 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:17.297 23:39:32 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.297 23:39:32 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:21.505 Hugepages 00:03:21.505 node hugesize free / total 00:03:21.505 23:39:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:21.505 23:39:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:21.505 23:39:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.505 23:39:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:21.505 23:39:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:21.505 23:39:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.505 23:39:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:21.505 23:39:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:21.505 23:39:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.505 00:03:21.505 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:21.505 23:39:35 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:21.505 23:39:35 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:21.505 23:39:35 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:21.505 23:39:36 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:21.505 23:39:36 setup.sh.acl -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:21.505 23:39:36 setup.sh.acl -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:21.505 23:39:36 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:21.505 ************************************ 00:03:21.505 START TEST denied 00:03:21.505 ************************************ 00:03:21.505 23:39:36 setup.sh.acl.denied -- common/autotest_common.sh@1117 -- # denied 00:03:21.505 23:39:36 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:21.505 23:39:36 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:21.505 23:39:36 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:21.505 23:39:36 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.505 23:39:36 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:25.716 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:25.716 23:39:40 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:25.716 23:39:40 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:25.716 23:39:40 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:25.717 23:39:40 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:25.717 23:39:40 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:25.717 23:39:40 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:25.717 23:39:40 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:25.717 23:39:40 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:25.717 23:39:40 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:25.717 23:39:40 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:29.932 00:03:29.933 real 0m8.877s 00:03:29.933 user 0m3.013s 00:03:29.933 sys 0m5.210s 00:03:29.933 23:39:45 setup.sh.acl.denied -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:29.933 23:39:45 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:29.933 ************************************ 00:03:29.933 END TEST denied 00:03:29.933 ************************************ 00:03:30.193 23:39:45 setup.sh.acl -- common/autotest_common.sh@1136 -- # return 0 00:03:30.193 23:39:45 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:30.193 23:39:45 setup.sh.acl -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:30.193 23:39:45 setup.sh.acl -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:30.193 23:39:45 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:30.193 ************************************ 00:03:30.193 START TEST allowed 00:03:30.193 ************************************ 00:03:30.193 23:39:45 setup.sh.acl.allowed -- common/autotest_common.sh@1117 -- # allowed 00:03:30.193 23:39:45 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:30.193 23:39:45 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:30.193 23:39:45 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:30.193 23:39:45 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.193 23:39:45 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:36.856 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:36.856 23:39:50 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:36.856 23:39:50 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:36.856 23:39:50 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:36.856 23:39:50 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:36.856 23:39:50 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:40.158 00:03:40.158 real 0m9.898s 00:03:40.158 user 0m3.073s 00:03:40.158 sys 0m5.154s 00:03:40.158 23:39:55 setup.sh.acl.allowed -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:40.158 23:39:55 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:40.158 ************************************ 00:03:40.158 END TEST allowed 00:03:40.158 ************************************ 00:03:40.158 23:39:55 setup.sh.acl -- common/autotest_common.sh@1136 -- # return 0 00:03:40.158 00:03:40.158 real 0m27.156s 00:03:40.158 user 0m9.188s 00:03:40.158 sys 0m15.864s 00:03:40.158 23:39:55 setup.sh.acl -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:40.158 23:39:55 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:40.158 ************************************ 00:03:40.158 END TEST acl 00:03:40.158 ************************************ 00:03:40.158 23:39:55 setup.sh -- common/autotest_common.sh@1136 -- # return 0 00:03:40.158 23:39:55 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:40.158 23:39:55 setup.sh -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:40.158 23:39:55 setup.sh -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:40.158 23:39:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:40.158 ************************************ 00:03:40.158 START TEST hugepages 00:03:40.158 ************************************ 00:03:40.158 23:39:55 setup.sh.hugepages -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:40.158 * Looking for test storage... 00:03:40.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:40.158 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:40.158 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:40.158 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:40.158 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:40.158 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:40.158 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:40.158 23:39:55 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:40.158 23:39:55 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:40.158 23:39:55 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:40.158 23:39:55 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:40.158 23:39:55 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.158 23:39:55 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.158 23:39:55 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.158 23:39:55 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.158 23:39:55 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.158 23:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:40.158 23:39:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:40.158 23:39:55 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106403112 kB' 'MemAvailable: 110144188 kB' 'Buffers: 4132 kB' 'Cached: 10669036 kB' 'SwapCached: 0 kB' 'Active: 7613580 kB' 'Inactive: 3701320 kB' 'Active(anon): 7122148 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645032 kB' 'Mapped: 202676 kB' 'Shmem: 6480416 kB' 'KReclaimable: 589268 kB' 'Slab: 1471856 kB' 'SReclaimable: 589268 kB' 'SUnreclaim: 882588 kB' 'KernelStack: 27712 kB' 'PageTables: 9448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460876 kB' 'Committed_AS: 8733684 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238144 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB'
00:03:40.158-00:03:40.160 23:39:55 setup.sh.hugepages -- setup/common.sh@31-32 -- # IFS=': '; read -r var val _; [[ $var == Hugepagesize ]] || continue -- repeated for every other /proc/meminfo key, MemTotal through HugePages_Surp, until the match below
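The trace above is setup/common.sh's meminfo reader walking /proc/meminfo one "key: value" line at a time and discarding every field except the one it was asked for. A minimal standalone sketch of the same pattern in bash, with a hypothetical helper name (the real helper, as the trace also shows, can additionally be pointed at a per-node meminfo file):

    #!/usr/bin/env bash
    # Print the value column of one /proc/meminfo field, e.g. Hugepagesize -> 2048.
    meminfo_field() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue   # every non-matching key is skipped, as in the trace
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    meminfo_field Hugepagesize   # prints 2048 on this host, matching the echo 2048 that follows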
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:03:40.160 23:39:55 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup 00:03:40.160 23:39:55 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:40.160 23:39:55 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:40.160 23:39:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:40.421 ************************************ 00:03:40.421 START TEST single_node_setup 00:03:40.421 ************************************ 00:03:40.421 23:39:55 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1117 -- # single_node_setup 00:03:40.421 23:39:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0 00:03:40.421 23:39:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@48 -- # local size=2097152 00:03:40.421 23:39:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:03:40.421 23:39:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift 00:03:40.421 23:39:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0') 00:03:40.421 23:39:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids 00:03:40.421 23:39:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:40.421 23:39:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:40.421 23:39:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:03:40.421 23:39:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:03:40.422 23:39:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes 00:03:40.422 23:39:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:40.422 23:39:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:40.422 23:39:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:40.422 23:39:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:40.422 23:39:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:03:40.422 23:39:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:03:40.422 23:39:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:03:40.422 23:39:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0 00:03:40.422 23:39:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024 00:03:40.422 23:39:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0 00:03:40.422 23:39:55 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output 00:03:40.422 23:39:55 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.422 23:39:55 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:44.635 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:44.635 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:44.635 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:44.635 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:44.635 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:44.635 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:44.635 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:44.635 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:44.635 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:44.635 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:44.635 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:44.635 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:44.635 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:44.635 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:44.635 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:44.635 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:44.635 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:44.635 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # verify_nr_hugepages 00:03:44.635 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node 00:03:44.635 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t 00:03:44.635 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s 00:03:44.635 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp 00:03:44.635 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv 00:03:44.635 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon 00:03:44.635 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:44.635 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:44.636 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:44.636 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:44.636 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:44.636 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.636 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.636 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.636 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.636 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.636 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.636 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.636 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.636 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108452772 kB' 'MemAvailable: 112193816 kB' 'Buffers: 4132 kB' 'Cached: 10669156 kB' 'SwapCached: 0 kB' 'Active: 7631472 kB' 'Inactive: 3701320 kB' 'Active(anon): 7140040 kB' 'Inactive(anon): 0 kB' 'Active(file): 
491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662440 kB' 'Mapped: 202980 kB' 'Shmem: 6480536 kB' 'KReclaimable: 589236 kB' 'Slab: 1469108 kB' 'SReclaimable: 589236 kB' 'SUnreclaim: 879872 kB' 'KernelStack: 27776 kB' 'PageTables: 9520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8755600 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238256 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB'
00:03:44.636-00:03:44.637 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31-32 -- # IFS=': '; read -r var val _; [[ $var == AnonHugePages ]] || continue -- repeated for every other /proc/meminfo key, MemTotal through HardwareCorrupted
00:03:44.637 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:44.637 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:03:44.637 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:03:44.637 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0
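At this point the test has already asked setup.sh for 1024 hugepages on node 0 (NRHUGE=1024, HUGENODE=0) and is re-reading /proc/meminfo: AnonHugePages came back 0, so transparent hugepages are not inflating the numbers, and the surplus and reserved counters are fetched next; the snapshot above already reports HugePages_Total: 1024 and HugePages_Free: 1024. A rough bash sketch of that kind of accounting check, assuming the expected pool size is NRHUGE and kernel-added surplus pages are excluded (hypothetical; not the literal verify_nr_hugepages logic):

    #!/usr/bin/env bash
    # Hypothetical sanity check: the pool configured by setup.sh should be visible in /proc/meminfo.
    expected=${NRHUGE:-1024}                                    # 1024 x 2048 kB pages were requested
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    echo "total=$total surplus=$surp reserved=$rsvd"
    # Surplus pages are ones the kernel over-committed on its own; the persistent pool must match.
    if (( total - surp == expected )); then
        echo "hugepage pool OK"
    else
        echo "hugepage pool mismatch"
        exit 1
    fi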
00:03:44.637 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:03:44.637 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:44.637 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:03:44.637 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:03:44.637 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:44.637 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.637 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.637 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.637 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.637 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.637 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:03:44.637 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:03:44.637 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108452444 kB' 'MemAvailable: 112193488 kB' 'Buffers: 4132 kB' 'Cached: 10669160 kB' 'SwapCached: 0 kB' 'Active: 7632284 kB' 'Inactive: 3701320 kB' 'Active(anon): 7140852 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663260 kB' 'Mapped: 203056 kB' 'Shmem: 6480540 kB' 'KReclaimable: 589236 kB' 'Slab: 1469156 kB' 'SReclaimable: 589236 kB' 'SUnreclaim: 879920 kB' 'KernelStack: 27744 kB' 'PageTables: 9436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8755620 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238224 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB'
00:03:44.637-00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31-32 -- # IFS=': '; read -r var val _; [[ $var == HugePages_Surp ]] || continue -- repeated for every other /proc/meminfo key, MemTotal through HugePages_Rsvd
00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0
00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108453976 kB' 'MemAvailable: 112195020 kB' 'Buffers: 4132 kB' 'Cached: 10669180 kB' 'SwapCached: 0 kB' 'Active: 7631704 kB' 'Inactive: 3701320 kB' 'Active(anon): 7140272 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663188 kB' 'Mapped: 202960 kB' 'Shmem: 6480560 kB' 'KReclaimable: 589236 kB' 'Slab: 1469156 kB' 'SReclaimable: 589236 kB' 'SUnreclaim: 879920 kB' 'KernelStack: 27728 kB' 'PageTables: 9368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8755644 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238256 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 
'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- 
# [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.639 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 
23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.640 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:03:44.641 nr_hugepages=1024 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:44.641 resv_hugepages=0 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:44.641 surplus_hugepages=0 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:44.641 anon_hugepages=0 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108454924 kB' 'MemAvailable: 112195968 kB' 'Buffers: 4132 kB' 'Cached: 10669200 kB' 'SwapCached: 0 kB' 'Active: 7631488 kB' 'Inactive: 3701320 kB' 'Active(anon): 7140056 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662852 kB' 'Mapped: 202960 kB' 'Shmem: 6480580 kB' 'KReclaimable: 589236 kB' 'Slab: 1469156 kB' 'SReclaimable: 589236 kB' 'SUnreclaim: 879920 kB' 'KernelStack: 27728 kB' 'PageTables: 9364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8755664 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238272 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.641 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.641 23:39:59 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 
23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 
23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.642 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 
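[editor's note] The loops being traced above all come from the same get_meminfo helper in test/setup/common.sh: it picks /proc/meminfo (or a node's own meminfo file when a node id is given), strips the "Node N " prefix, and scans "Key: value" pairs until the requested key is found. The following is a minimal sketch reconstructed from the traced commands, simplified and not the verbatim SPDK source:

    # Sketch of get_meminfo as reconstructed from the trace; simplified, not verbatim.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # Per-node lookups switch to the sysfs copy of meminfo when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Strip the "Node N " prefix carried by per-node meminfo lines, then scan
        # "Key: value" pairs until the requested key is found and print its value.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

    # Usage as seen in the trace: surp=$(get_meminfo HugePages_Surp)
    #                             get_meminfo HugePages_Surp 0   # node 0 only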
00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58975612 kB' 'MemUsed: 6683396 kB' 'SwapCached: 0 kB' 'Active: 1899592 kB' 'Inactive: 285896 kB' 'Active(anon): 1741844 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 285896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2079480 kB' 'Mapped: 37104 kB' 'AnonPages: 109336 kB' 'Shmem: 1635836 kB' 'KernelStack: 14328 kB' 'PageTables: 3476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 337404 kB' 'Slab: 757116 kB' 'SReclaimable: 337404 kB' 'SUnreclaim: 419712 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.643 23:39:59 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.643 23:39:59 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.643 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.644 23:39:59 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:03:44.644 node0=1024 expecting 1024 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:03:44.644 00:03:44.644 real 0m4.015s 00:03:44.644 user 0m1.452s 00:03:44.644 sys 0m2.517s 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:44.644 23:39:59 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x 00:03:44.644 ************************************ 00:03:44.644 END TEST single_node_setup 00:03:44.644 ************************************ 00:03:44.644 23:39:59 setup.sh.hugepages -- common/autotest_common.sh@1136 -- # return 0 00:03:44.644 23:39:59 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc 00:03:44.644 23:39:59 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:44.644 23:39:59 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:44.644 
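[editor's note] The xtrace above shows how setup/common.sh's get_meminfo walks /proc/meminfo (or a node's own meminfo file) with IFS=': ' and read -r var val _, skipping every key that does not match the requested field and echoing the matching value. The standalone reduction below is a sketch for illustration only, not the upstream implementation: the extglob prefix strip is simplified to a plain glob pattern and the function layout is assumed; only the parsing pattern itself is taken from the trace.

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo mem line var val _
    # Per-node queries read that node's own meminfo file when it exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    for line in "${mem[@]}"; do
        line=${line#Node [0-9]* }          # drop the "Node <id> " prefix used by per-node files
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # same skip-until-match loop seen in the trace
        echo "${val:-0}"
        return 0
    done
    echo 0
}

Called as, e.g., get_meminfo HugePages_Surp (the query traced above), or with a node id for per-node counters; the hugepages tests use these values to build checks like the 'node0=1024 expecting 1024' comparison that closes single_node_setup and the per-node accounting in the even_2G_alloc test that follows.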
23:39:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:44.644 ************************************ 00:03:44.644 START TEST even_2G_alloc 00:03:44.644 ************************************ 00:03:44.644 23:39:59 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1117 -- # even_2G_alloc 00:03:44.644 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152 00:03:44.644 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:03:44.644 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:44.644 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:44.644 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:44.644 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:44.644 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:44.644 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:44.644 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:44.644 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:44.644 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:44.644 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:44.644 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:44.644 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:03:44.644 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:44.644 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:03:44.644 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512 00:03:44.644 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1 00:03:44.644 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:44.644 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:03:44.644 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0 00:03:44.644 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:44.644 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:44.645 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024 00:03:44.645 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output 00:03:44.645 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.645 23:39:59 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:48.884 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:48.884 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:48.884 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:48.884 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:48.884 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:48.884 0000:80:01.3 (8086 0b00): Already using 
the vfio-pci driver 00:03:48.884 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:48.884 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:48.884 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:48.884 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:48.884 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:48.884 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:48.884 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:48.884 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:48.884 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:48.884 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:48.884 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108491384 kB' 'MemAvailable: 112232428 kB' 'Buffers: 4132 kB' 'Cached: 10669336 kB' 'SwapCached: 0 kB' 'Active: 7631160 kB' 'Inactive: 3701320 kB' 'Active(anon): 7139728 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662280 kB' 'Mapped: 201700 kB' 'Shmem: 6480716 kB' 'KReclaimable: 589236 kB' 'Slab: 1469696 kB' 'SReclaimable: 589236 kB' 
'SUnreclaim: 880460 kB' 'KernelStack: 27824 kB' 'PageTables: 8524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8742092 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238320 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.885 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108489440 kB' 'MemAvailable: 112230484 kB' 'Buffers: 4132 kB' 'Cached: 10669340 kB' 'SwapCached: 0 kB' 'Active: 7631240 kB' 'Inactive: 3701320 kB' 'Active(anon): 7139808 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662380 kB' 'Mapped: 201700 kB' 'Shmem: 6480720 kB' 'KReclaimable: 589236 kB' 'Slab: 1469732 kB' 'SReclaimable: 589236 kB' 'SUnreclaim: 880496 kB' 'KernelStack: 27856 kB' 'PageTables: 9132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8742112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238320 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:48.887 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108488892 kB' 'MemAvailable: 112229936 kB' 'Buffers: 4132 kB' 'Cached: 10669356 kB' 'SwapCached: 0 kB' 'Active: 7631300 kB' 'Inactive: 3701320 kB' 'Active(anon): 7139868 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662448 kB' 'Mapped: 201700 kB' 'Shmem: 6480736 kB' 'KReclaimable: 589236 kB' 'Slab: 1469732 kB' 'SReclaimable: 589236 kB' 'SUnreclaim: 880496 kB' 'KernelStack: 27936 kB' 'PageTables: 9500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8742132 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238416 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 
23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.888 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:03:48.889 nr_hugepages=1024 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:48.889 resv_hugepages=0 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:48.889 surplus_hugepages=0 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:48.889 anon_hugepages=0 00:03:48.889 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 
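The xtrace above is setup/common.sh's get_meminfo walking a meminfo file one "key: value" line at a time: mapfile reads the whole file, a "Node <N> " prefix is stripped so per-node files parse the same way, and an IFS=': ' read loop skips every field until the requested key (HugePages_Surp, then HugePages_Rsvd, then HugePages_Total) matches, at which point the value is echoed and the function returns. A minimal, self-contained sketch of that lookup pattern, assuming a stock meminfo layout (get_meminfo_sketch and its argument handling are illustrative, not the exact setup/common.sh source):

#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

# Sketch of the lookup pattern seen in the trace: return the value of one
# meminfo field, from /proc/meminfo or from a per-node meminfo file.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read the node's own meminfo when it exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo

    local -a mem
    mapfile -t mem <"$mem_f"
    # Per-node files prefix every line with "Node <N> "; drop the prefix so
    # both layouts reduce to "Key:   value [kB]".
    mem=("${mem[@]#Node +([0-9]) }")

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue   # non-matching fields are skipped
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo_sketch HugePages_Total     # e.g. 1024
get_meminfo_sketch HugePages_Surp 0    # surplus 2 MiB pages on NUMA node 0

The long runs of "[[ Field == \H\u\g\e\P\a\g\e\s... ]]" followed by "continue" in this log are simply the xtrace of that skip loop for every non-matching meminfo field.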
23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108487756 kB' 'MemAvailable: 112228800 kB' 'Buffers: 4132 kB' 'Cached: 10669380 kB' 'SwapCached: 0 kB' 'Active: 7631208 kB' 'Inactive: 3701320 kB' 'Active(anon): 7139776 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662328 kB' 'Mapped: 201700 kB' 'Shmem: 6480760 kB' 'KReclaimable: 589236 kB' 'Slab: 1469732 kB' 'SReclaimable: 589236 kB' 'SUnreclaim: 880496 kB' 'KernelStack: 27808 kB' 'PageTables: 9432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8740536 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238352 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.890 23:40:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.890 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.891 23:40:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60042660 kB' 'MemUsed: 5616348 kB' 'SwapCached: 0 kB' 'Active: 1897560 kB' 'Inactive: 285896 kB' 'Active(anon): 1739812 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 285896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2079532 kB' 'Mapped: 36324 kB' 'AnonPages: 106996 kB' 'Shmem: 1635888 kB' 'KernelStack: 14488 kB' 'PageTables: 3664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 337404 kB' 'Slab: 757292 kB' 
'SReclaimable: 337404 kB' 'SUnreclaim: 419888 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.891 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 
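With the global counters confirmed (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0), hugepages.sh moves to the per-node pass: get_nodes enumerates /sys/devices/system/node/node<N>, expects 512 pages on each of the 2 nodes, and the node0 meminfo printed above already reports 'HugePages_Total: 512', 'HugePages_Free: 512' and 'HugePages_Surp: 0'. A small sketch of that even-split check, using awk for brevity rather than the script's own read loop (expected_per_node and the messages are illustrative):

#!/usr/bin/env bash
# Sketch: check that 2 MiB huge pages are split evenly across NUMA nodes,
# mirroring the per-node pass of the even_2G_alloc test.
expected_per_node=512

for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Per-node meminfo lines look like "Node 0 HugePages_Total:   512".
    total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
    surp=$(awk  '$3 == "HugePages_Surp:"  {print $4}' "$node_dir/meminfo")
    # Surplus pages do not count toward the persistent per-node allocation.
    if (( total - surp != expected_per_node )); then
        echo "node$node: $((total - surp)) huge pages, expected $expected_per_node" >&2
        exit 1
    fi
done
echo "huge pages evenly allocated across all nodes"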
23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.892 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
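[Editor's note] The repeated trace lines above are setup/common.sh's get_meminfo stepping through every key of node 0's meminfo until it reaches HugePages_Surp (matched a few entries below, where the script echoes 0 and returns); every non-matching key simply hits "continue", which is why each field produces the same IFS / read / [[ ... ]] / continue quartet. A minimal stand-alone sketch of that parsing pattern follows; it mirrors the IFS=': ' read loop seen in the trace but is an illustrative simplification, not the actual setup/common.sh helper.

    #!/usr/bin/env bash
    # Hedged sketch: print one key's value from a meminfo-style file.
    # get_meminfo_sketch is an illustrative name, not the real function.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node figures live under sysfs when a node index is given.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Per-node files prefix every line with "Node <n> "; drop it so the
        # key/value layout matches /proc/meminfo before the read loop.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        echo 0   # key not present
    }

    # Usage (illustrative): surplus hugepages on NUMA node 1
    #   get_meminfo_sketch HugePages_Surp 1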
00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 48445244 kB' 'MemUsed: 12234596 kB' 'SwapCached: 0 kB' 'Active: 5733492 kB' 'Inactive: 3415424 kB' 'Active(anon): 5399808 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3415424 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8594016 kB' 'Mapped: 165376 kB' 'AnonPages: 555108 kB' 'Shmem: 4844908 kB' 'KernelStack: 13416 kB' 'PageTables: 5996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 251832 kB' 'Slab: 712440 kB' 'SReclaimable: 251832 kB' 'SUnreclaim: 460608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.893 23:40:03 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.893 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.894 
23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:03:48.894 node0=512 expecting 512 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:03:48.894 node1=512 expecting 512 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]] 00:03:48.894 00:03:48.894 real 0m4.064s 00:03:48.894 user 0m1.612s 00:03:48.894 sys 0m2.510s 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:48.894 23:40:03 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:48.894 ************************************ 00:03:48.894 END TEST even_2G_alloc 00:03:48.894 ************************************ 00:03:48.894 23:40:03 setup.sh.hugepages -- common/autotest_common.sh@1136 -- # return 0 00:03:48.894 23:40:03 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc 00:03:48.894 23:40:03 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:48.894 23:40:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:48.894 23:40:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:48.894 ************************************ 00:03:48.894 START TEST odd_alloc 00:03:48.894 ************************************ 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- 
common/autotest_common.sh@1117 -- # odd_alloc 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.894 23:40:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:52.201 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:52.201 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:52.201 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:52.201 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:52.201 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:52.201 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:52.201 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:52.201 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:52.201 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:52.201 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:52.201 0000:00:01.7 (8086 0b00): Already using the vfio-pci 
driver 00:03:52.201 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:52.201 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:52.201 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:52.201 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:52.201 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:52.201 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108522132 kB' 'MemAvailable: 112263176 kB' 'Buffers: 4132 kB' 'Cached: 10669512 kB' 'SwapCached: 0 kB' 'Active: 7631408 kB' 'Inactive: 3701320 kB' 'Active(anon): 7139976 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661928 kB' 'Mapped: 201880 kB' 'Shmem: 6480892 kB' 'KReclaimable: 589236 kB' 'Slab: 1470096 kB' 'SReclaimable: 589236 kB' 'SUnreclaim: 880860 kB' 'KernelStack: 27696 kB' 'PageTables: 9132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8740332 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238256 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.201 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.465 23:40:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.465 23:40:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.465 23:40:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.465 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.466 
23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108522376 
kB' 'MemAvailable: 112263420 kB' 'Buffers: 4132 kB' 'Cached: 10669516 kB' 'SwapCached: 0 kB' 'Active: 7631572 kB' 'Inactive: 3701320 kB' 'Active(anon): 7140140 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662128 kB' 'Mapped: 201844 kB' 'Shmem: 6480896 kB' 'KReclaimable: 589236 kB' 'Slab: 1470096 kB' 'SReclaimable: 589236 kB' 'SUnreclaim: 880860 kB' 'KernelStack: 27680 kB' 'PageTables: 9140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8740348 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238240 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
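[Editor's note] Earlier in this odd_alloc verification, right after the setup.sh driver listing, the script confirmed that transparent hugepages are not set to "[never]" and read AnonHugePages (which came back 0, hence anon=0) before starting the system-wide HugePages_Surp scan that continues below. A minimal sketch of that anon check, using the standard sysfs/procfs paths; the variable names are illustrative, not the script's own.

    # Hedged sketch of the anon-THP check traced above.
    thp_setting=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp_setting != *"[never]"* ]]; then
        # THP can still back anonymous memory, so account for it separately.
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    else
        anon=0
    fi
    echo "AnonHugePages in use: ${anon:-0} kB"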
00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
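(Editor's note, not part of the log.) The long run of "continue" lines above is the trace of a meminfo-scanning loop in setup/common.sh: each key is read with IFS=': ' and skipped unless it matches the requested field (HugePages_Surp in this pass). Below is a minimal sketch of that pattern for readers who want to reproduce it outside the harness; the function name, variable names, and exact layout are illustrative stand-ins, not the verbatim SPDK helper.

#!/usr/bin/env bash
# Sketch of the parsing pattern the xtrace above is executing. Every key that
# does not match the requested one is skipped, which is what produces the long
# run of "continue" lines in this log. Names here are illustrative.
get_meminfo_sketch() {
	local get=$1 node=${2:-}
	local mem_f=/proc/meminfo
	# Per-node statistics come from sysfs when a node index is supplied.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	local var val _
	# The sed strips the "Node N " prefix carried by per-node meminfo lines,
	# mirroring the "${mem[@]#Node +([0-9]) }" step visible in the trace.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue   # one "continue" line above per mismatch
		echo "$val"
		return 0
	done < <(sed 's/^Node [0-9][0-9]* //' "$mem_f")
	return 1
}
# Example: get_meminfo_sketch HugePages_Surp      -> 0 on this box
#          get_meminfo_sketch HugePages_Total 0   -> per-node count on node0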
00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.466 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 
23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108523412 kB' 'MemAvailable: 112264456 kB' 'Buffers: 4132 kB' 'Cached: 10669532 kB' 'SwapCached: 0 kB' 'Active: 7630924 kB' 'Inactive: 3701320 kB' 'Active(anon): 7139492 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661908 kB' 'Mapped: 201764 kB' 'Shmem: 6480912 kB' 'KReclaimable: 589236 kB' 'Slab: 1470084 kB' 'SReclaimable: 589236 kB' 'SUnreclaim: 880848 kB' 'KernelStack: 27664 kB' 'PageTables: 9072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8740372 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238240 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.467 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.468 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025 00:03:52.469 nr_hugepages=1025 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:52.469 resv_hugepages=0 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:52.469 surplus_hugepages=0 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:52.469 anon_hugepages=0 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages )) 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- 
# [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108524096 kB' 'MemAvailable: 112265140 kB' 'Buffers: 4132 kB' 'Cached: 10669568 kB' 'SwapCached: 0 kB' 'Active: 7630548 kB' 'Inactive: 3701320 kB' 'Active(anon): 7139116 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661488 kB' 'Mapped: 201764 kB' 'Shmem: 6480948 kB' 'KReclaimable: 589236 kB' 'Slab: 1470084 kB' 'SReclaimable: 589236 kB' 'SUnreclaim: 880848 kB' 'KernelStack: 27648 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8740392 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238240 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 
23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
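(Editor's note, not part of the log.) The HugePages_Surp, HugePages_Rsvd, and HugePages_Total values being extracted through this stretch of the trace feed the consistency check that appears at hugepages.sh@109 below, "(( 1025 == nr_hugepages + surp + resv ))". A small self-contained sketch of that check, with an assumed helper name (meminfo_val) and illustrative variable names rather than the exact hugepages.sh source:

#!/usr/bin/env bash
# Sketch of the accounting check this scan feeds: the totals the kernel
# reports must add up to the 1025 pages the odd_alloc test requested.
meminfo_val() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

nr_hugepages=1025                       # requested by the odd_alloc test
surp=$(meminfo_val HugePages_Surp)      # 0 in the dump above
resv=$(meminfo_val HugePages_Rsvd)      # 0 in the dump above
total=$(meminfo_val HugePages_Total)    # 1025 in the dump above

(( total == nr_hugepages + surp + resv )) ||
	{ echo "hugepage accounting mismatch: kernel reports $total pages" >&2; exit 1; }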
00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.469 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
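(Editor's note, not part of the log.) Once the global count is confirmed, get_nodes runs a little further down and sets nodes_sys[0]=513 and nodes_sys[1]=512, i.e. the "odd" split of 1025 pages across the two NUMA nodes, which the harness then verifies against each node's own meminfo file. A sketch of that per-node read-back under those assumptions; the expected[] values are taken from the nodes_sys assignments visible below, and the rest is an illustrative reconstruction, not the exact hugepages.sh loop.

#!/usr/bin/env bash
# Sketch: compare the expected odd split against what each NUMA node reports.
declare -A expected=( [0]=513 [1]=512 )

for node_dir in /sys/devices/system/node/node[0-9]*; do
	id=${node_dir##*node}
	# Per-node lines look like "Node 0 HugePages_Total:   513".
	got=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
	printf 'node%s: expected %s, kernel reports %s\n' \
		"$id" "${expected[$id]:-?}" "$got"
done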
00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:52.470 23:40:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60062908 kB' 'MemUsed: 5596100 kB' 'SwapCached: 0 kB' 'Active: 1897264 kB' 'Inactive: 285896 kB' 'Active(anon): 1739516 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 285896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2079552 kB' 'Mapped: 36324 kB' 'AnonPages: 106772 kB' 'Shmem: 1635908 kB' 'KernelStack: 14248 kB' 'PageTables: 3120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 337404 kB' 'Slab: 757636 kB' 'SReclaimable: 337404 kB' 'SUnreclaim: 420232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.470 23:40:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.470 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 48461828 kB' 'MemUsed: 12218012 kB' 'SwapCached: 0 kB' 'Active: 5733656 kB' 'Inactive: 3415424 kB' 'Active(anon): 5399972 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3415424 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8594152 kB' 'Mapped: 165440 kB' 'AnonPages: 555140 kB' 'Shmem: 4845044 kB' 'KernelStack: 13416 kB' 'PageTables: 5952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 251832 kB' 'Slab: 712448 kB' 'SReclaimable: 251832 kB' 'SUnreclaim: 460616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.471 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 
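With both nodes reporting HugePages_Surp of 0, the loop above folds the surplus into the expected per-node counts and records them in the sorted_t/sorted_s arrays; the echoes that follow state the expectation for the odd 1025-page split. A simplified sketch of that bookkeeping (meminfo_get is the hypothetical helper sketched above, and the 513/512 split is the one visible in this run, not a general rule):

    declare -a expected=([0]=513 [1]=512)            # odd 1025-page request spread over 2 nodes
    for node in 0 1; do
        surp=$(meminfo_get HugePages_Surp "$node")   # 0 on both nodes here
        echo "node${node}=$(( expected[node] + surp )) expecting ${expected[node]}"
    done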
00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:03:52.472 node0=513 expecting 513 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:03:52.472 node1=512 expecting 512 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:52.472 00:03:52.472 real 0m3.962s 00:03:52.472 user 0m1.528s 00:03:52.472 sys 0m2.500s 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:52.472 23:40:07 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:52.472 ************************************ 00:03:52.472 END TEST odd_alloc 00:03:52.472 ************************************ 00:03:52.472 23:40:07 setup.sh.hugepages -- common/autotest_common.sh@1136 -- # return 0 00:03:52.472 23:40:07 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test custom_alloc custom_alloc 00:03:52.472 23:40:07 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:52.472 23:40:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:52.472 23:40:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:52.472 ************************************ 00:03:52.472 START TEST custom_alloc 00:03:52.472 ************************************ 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1117 -- # custom_alloc 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 
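custom_alloc starts by turning two target sizes into hugepage counts: the first get_test_nr_hugepages call yields nr_hugepages=512 and the second 1024. That is the arithmetic below, assuming the sizes are kB divided by this system's 2048 kB default hugepage size (a reading of the trace, not the hugepages.sh source):

    default_hugepages=2048                 # kB, the Hugepagesize this system reports
    pages_for() { echo $(( $1 / default_hugepages )); }
    pages_for 1048576                      # -> 512  (first call in the trace)
    pages_for 2097152                      # -> 1024 (second call)

Those counts end up in nodes_hp[0]=512 and nodes_hp[1]=1024, so the HUGENODE string assembled a few lines below requests 1536 pages in total.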
00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:52.472 23:40:07 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:03:52.472 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:52.473 23:40:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:03:52.473 23:40:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.473 23:40:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:56.677 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:56.677 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:56.677 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:56.677 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:56.677 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:56.677 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:56.677 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:56.677 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:56.677 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:56.677 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:56.677 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:56.677 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:56.677 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:56.677 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:56.677 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:56.677 0000:00:01.0 (8086 
0b00): Already using the vfio-pci driver 00:03:56.677 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:56.677 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:03:56.677 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:03:56.677 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:03:56.677 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:56.677 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:56.677 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:56.677 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:56.677 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:56.677 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:56.677 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107519240 kB' 'MemAvailable: 111260276 kB' 'Buffers: 4132 kB' 'Cached: 10669692 kB' 'SwapCached: 0 kB' 'Active: 7634348 kB' 'Inactive: 3701320 kB' 'Active(anon): 7142916 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664760 kB' 'Mapped: 201912 kB' 'Shmem: 6481072 kB' 'KReclaimable: 589228 kB' 'Slab: 1469900 kB' 'SReclaimable: 589228 kB' 'SUnreclaim: 880672 kB' 'KernelStack: 27680 kB' 'PageTables: 9132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8741156 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238192 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 
'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.678 
23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.678 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.679 23:40:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.679 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107520428 kB' 'MemAvailable: 111261464 kB' 'Buffers: 4132 kB' 'Cached: 10669696 kB' 'SwapCached: 0 kB' 'Active: 7633772 kB' 'Inactive: 3701320 kB' 'Active(anon): 7142340 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664148 kB' 'Mapped: 201876 kB' 'Shmem: 6481076 kB' 'KReclaimable: 589228 kB' 'Slab: 1469876 kB' 'SReclaimable: 589228 kB' 'SUnreclaim: 880648 kB' 'KernelStack: 27632 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8741172 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238176 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.680 23:40:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.680 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.681 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107521284 kB' 'MemAvailable: 111262320 kB' 'Buffers: 4132 kB' 'Cached: 10669712 kB' 'SwapCached: 0 kB' 'Active: 7633436 kB' 'Inactive: 3701320 kB' 'Active(anon): 7142004 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664292 kB' 'Mapped: 201788 kB' 'Shmem: 6481092 kB' 'KReclaimable: 589228 kB' 'Slab: 1469900 kB' 'SReclaimable: 589228 kB' 'SUnreclaim: 880672 kB' 'KernelStack: 27632 kB' 'PageTables: 8956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8741196 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238176 kB' 'VmallocChunk: 0 kB' 'Percpu: 
150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.682 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.683 23:40:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.683 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 
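The long runs of `[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... continue` records above are bash xtrace of the get_meminfo helper in setup/common.sh scanning /proc/meminfo one field at a time (xtrace prints the unquoted right-hand pattern with per-character backslashes, which is why the field names appear escaped). A minimal standalone sketch of that pattern follows; it is a simplified illustration, not the exact helper — for instance it strips the per-node "Node N " prefix with sed rather than the extglob expansion seen in the trace:

    # Sketch: look up one field from /proc/meminfo (or a NUMA node's meminfo).
    # Simplified illustration of the loop traced above; variable names mirror the trace.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node lookups read that node's meminfo, whose lines carry a "Node N " prefix.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            # First matching field wins; the numeric value is printed without its "kB" suffix.
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        echo 0
    }

The caller in setup/hugepages.sh gathers AnonHugePages, HugePages_Surp and HugePages_Rsvd this way and, as the trace below shows, echoes the resulting counts and checks them for consistency against the 1536 huge pages in play ((( 1536 == nr_hugepages + surp + resv ))).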
00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536 00:03:56.684 nr_hugepages=1536 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:56.684 resv_hugepages=0 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:56.684 surplus_hugepages=0 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:56.684 anon_hugepages=0 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages )) 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:56.684 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107520528 kB' 'MemAvailable: 111261564 kB' 'Buffers: 4132 kB' 'Cached: 10669752 kB' 'SwapCached: 0 kB' 'Active: 7633164 kB' 'Inactive: 3701320 kB' 'Active(anon): 7141732 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663976 kB' 'Mapped: 201788 kB' 'Shmem: 6481132 kB' 'KReclaimable: 589228 kB' 'Slab: 1469900 kB' 'SReclaimable: 589228 kB' 'SUnreclaim: 880672 kB' 'KernelStack: 27648 kB' 'PageTables: 9008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8741216 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238192 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 
'DirectMap1G: 74448896 kB' 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.685 23:40:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.685 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 
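
The backslash-escaped field names in the trace above (for example \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l) are simply how bash xtrace renders a quoted right-hand side of == inside [[ ]]: quoting makes the comparison literal instead of a glob match. A small standalone illustration follows; the variable values are made up for the demo and are not taken from the SPDK scripts:

    key="HugePages_Total"
    candidate="HugePages_Totally"            # made-up value, only for the demo
    [[ $candidate == $key*  ]] && echo "unquoted pattern: glob match"
    [[ $candidate == "$key" ]] || echo "quoted pattern: literal compare, no match"
    # Under 'set -x' the quoted "$key" is printed as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l,
    # which is the form seen throughout this log.
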
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.686 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60089376 kB' 'MemUsed: 5569632 kB' 'SwapCached: 0 kB' 'Active: 1899492 kB' 'Inactive: 285896 kB' 'Active(anon): 1741744 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 285896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2079588 kB' 'Mapped: 36324 kB' 'AnonPages: 108888 kB' 'Shmem: 1635944 kB' 'KernelStack: 14216 kB' 'PageTables: 3028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 337404 kB' 'Slab: 757668 kB' 'SReclaimable: 337404 kB' 'SUnreclaim: 420264 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.687 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.688 23:40:11 
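
For reference, the loop being traced here is the get_meminfo helper walking /sys/devices/system/node/node0/meminfo one field at a time until it reaches HugePages_Surp. A simplified, self-contained sketch of that logic follows; it approximates what setup/common.sh appears to do in this trace and is not a copy of it:

    get_meminfo_sketch() {
        # Usage: get_meminfo_sketch <field> [node]
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        local line var val _
        # Per-node statistics live in sysfs and prefix every row with "Node <id> ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node $node }                 # drop the per-node prefix if present
            IFS=': ' read -r var val _ <<< "$line"   # split "Field:   value kB"
            if [[ $var == "$get" ]]; then            # quoted => literal field-name match
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    # Examples matching this run:
    #   get_meminfo_sketch HugePages_Surp 0     # per-node surplus pages (0 here)
    #   get_meminfo_sketch Hugepagesize         # global huge page size in kB (2048 here)
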
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 
1 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.688 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 47431280 kB' 'MemUsed: 13248560 kB' 'SwapCached: 0 kB' 'Active: 5733724 kB' 'Inactive: 3415424 kB' 'Active(anon): 5400040 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3415424 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8594332 kB' 'Mapped: 165464 kB' 'AnonPages: 555096 kB' 'Shmem: 4845224 kB' 'KernelStack: 13432 kB' 'PageTables: 5980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 251824 kB' 'Slab: 712232 kB' 'SReclaimable: 251824 kB' 'SUnreclaim: 460408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.689 23:40:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.689 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.690 23:40:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:56.690 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:03:56.690 node0=512 expecting 512 00:03:56.690 23:40:11 
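
The two "expecting" lines close out the custom_alloc check: 1536 pages were requested in total and split unevenly, 512 on node0 and 1024 on node1, and the script then compares the joined per-node counts against that expectation. A rough standalone equivalent of the final comparison is sketched below; the 512/1024 split is taken from this run, the sysfs nr_hugepages files are an equivalent view of the per-node HugePages_Total values read in the trace, and the real hugepages.sh additionally tracks surplus and reserved pages:

    expected=( 512 1024 )                      # node0, node1 as requested by this test
    want=$(IFS=,; echo "${expected[*]}")       # "512,1024"
    actual="" total=0
    for node in 0 1; do
        pages=$(cat /sys/devices/system/node/node"$node"/hugepages/hugepages-2048kB/nr_hugepages)
        actual+=${actual:+,}$pages
        total=$((total + pages))
    done
    (( total == 1536 ))      && echo "total huge pages OK ($total)"
    [[ $actual == "$want" ]] && echo "per-node split OK ($actual)"
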
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:56.691 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:56.691 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:56.691 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024' 00:03:56.691 node1=1024 expecting 1024 00:03:56.691 23:40:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:56.691 00:03:56.691 real 0m3.787s 00:03:56.691 user 0m1.439s 00:03:56.691 sys 0m2.366s 00:03:56.691 23:40:11 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:56.691 23:40:11 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:56.691 ************************************ 00:03:56.691 END TEST custom_alloc 00:03:56.691 ************************************ 00:03:56.691 23:40:11 setup.sh.hugepages -- common/autotest_common.sh@1136 -- # return 0 00:03:56.691 23:40:11 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:56.691 23:40:11 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:56.691 23:40:11 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:56.691 23:40:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:56.691 ************************************ 00:03:56.691 START TEST no_shrink_alloc 00:03:56.691 ************************************ 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1117 -- # no_shrink_alloc 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0') 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for 
_no_nodes in "${user_nodes[@]}" 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.691 23:40:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:00.899 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:00.899 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:00.899 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:00.899 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:00.899 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:00.899 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:00.899 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:00.899 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:00.899 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:00.899 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:00.899 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:00.899 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:00.899 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:00.899 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:00.899 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:00.899 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:00.899 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:00.899 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages 00:04:00.899 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:04:00.899 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:00.899 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:00.899 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:00.899 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:00.899 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:00.899 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.899 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:00.899 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.899 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:00.899 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.899 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.899 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.899 23:40:15 
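
At this point the no_shrink_alloc test has computed nr_hugepages=1024 from the requested 2097152 kB (1024 pages of the default 2048 kB size) and hands that to scripts/setup.sh through NRHUGE=1024 and HUGENODE=0. A rough manual equivalent of that reservation is sketched below; it shows the intent only, since the exact steps setup.sh performs are not visible in this log:

    NRHUGE=1024 HUGENODE=0                    # 1024 * 2048 kB = 2097152 kB, node 0 only
    echo "$NRHUGE" | sudo tee \
        /sys/devices/system/node/node"$HUGENODE"/hugepages/hugepages-2048kB/nr_hugepages
    # The verify step that follows in the trace also inspects
    # /sys/kernel/mm/transparent_hugepage/enabled ("always [madvise] never" above)
    # before reading the AnonHugePages counter.
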
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.899 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.899 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.899 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.899 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.899 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108542552 kB' 'MemAvailable: 112283588 kB' 'Buffers: 4132 kB' 'Cached: 10669880 kB' 'SwapCached: 0 kB' 'Active: 7635180 kB' 'Inactive: 3701320 kB' 'Active(anon): 7143748 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 665272 kB' 'Mapped: 201896 kB' 'Shmem: 6481260 kB' 'KReclaimable: 589228 kB' 'Slab: 1469464 kB' 'SReclaimable: 589228 kB' 'SUnreclaim: 880236 kB' 'KernelStack: 27664 kB' 'PageTables: 9108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8745328 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238352 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.900 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108543316 kB' 'MemAvailable: 112284352 kB' 'Buffers: 4132 kB' 'Cached: 10669880 kB' 'SwapCached: 0 kB' 'Active: 7635892 kB' 'Inactive: 3701320 kB' 'Active(anon): 7144460 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 666008 kB' 'Mapped: 201888 kB' 'Shmem: 6481260 kB' 'KReclaimable: 589228 kB' 'Slab: 1469500 kB' 'SReclaimable: 589228 kB' 'SUnreclaim: 880272 kB' 'KernelStack: 27792 kB' 'PageTables: 9380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8745344 kB' 'VmallocTotal: 13743895347199 kB' 
'VmallocUsed: 238288 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.901 23:40:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.901 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 
23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.902 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.903 23:40:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108544048 kB' 'MemAvailable: 112285084 kB' 'Buffers: 4132 kB' 'Cached: 10669896 kB' 'SwapCached: 0 kB' 'Active: 7635552 kB' 'Inactive: 3701320 kB' 'Active(anon): 7144120 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 666080 kB' 'Mapped: 201804 kB' 'Shmem: 6481276 kB' 'KReclaimable: 589228 kB' 'Slab: 1469524 kB' 'SReclaimable: 589228 kB' 'SUnreclaim: 880296 kB' 'KernelStack: 27760 kB' 'PageTables: 9084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8745368 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238352 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.903 
23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.903 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
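The repeated '[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]' / 'continue' pairs above are xtrace output from setup/common.sh's get_meminfo helper: each meminfo line is split on ': ' into a key and a value, every key that is not the requested field is skipped, and the value of the matching field is echoed back (0 for HugePages_Rsvd in this run). A minimal sketch of that parsing idiom, simplified relative to the real helper (which buffers the file with mapfile first); the function name here is hypothetical:

# Sketch only: stand-in for the get_meminfo scan seen in this trace.
get_meminfo_sketch() {
    local get=$1 var val _
    # IFS=': ' splits "HugePages_Rsvd:   0" into var=HugePages_Rsvd, val=0;
    # for "MemTotal: ... kB" lines the unit lands in the throwaway "_" field.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching keys, as traced above
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}
# e.g. get_meminfo_sketch HugePages_Rsvd   -> 0 in this run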
00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.904 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:00.905 nr_hugepages=1024 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:00.905 resv_hugepages=0 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:00.905 surplus_hugepages=0 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:00.905 anon_hugepages=0 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108543596 kB' 'MemAvailable: 112284632 kB' 'Buffers: 4132 kB' 'Cached: 10669900 kB' 'SwapCached: 0 kB' 'Active: 7635340 kB' 'Inactive: 3701320 kB' 'Active(anon): 7143908 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 665864 kB' 'Mapped: 201796 kB' 'Shmem: 6481280 kB' 'KReclaimable: 589228 kB' 'Slab: 1469524 kB' 'SReclaimable: 589228 kB' 'SUnreclaim: 880296 kB' 'KernelStack: 27712 kB' 'PageTables: 9092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8745388 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238336 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.905 23:40:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.905 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
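The caller's bookkeeping around these scans is visible at setup/hugepages.sh@99-109 earlier in the trace: the reserved, surplus and anonymous huge page counts are captured, then the HugePages_Total read back from the kernel must equal the requested count plus surplus plus reserved pages. A small worked sketch with this run's numbers (nr_hugepages, surp and resv follow the trace; 'total' is an illustrative name):

# Consistency check as traced at setup/hugepages.sh@106-109, using this run's values.
nr_hugepages=1024   # requested huge pages
surp=0              # surplus_hugepages reported above
resv=0              # resv_hugepages reported above
total=1024          # HugePages_Total returned by the meminfo scan

(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
(( total == nr_hugepages )) && echo 'no surplus or reserved pages outstanding'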
00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.906 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59057732 kB' 'MemUsed: 6601276 kB' 'SwapCached: 0 kB' 'Active: 1901660 kB' 'Inactive: 285896 kB' 'Active(anon): 1743912 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 285896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2079708 kB' 'Mapped: 36324 kB' 'AnonPages: 111008 kB' 'Shmem: 1636064 kB' 'KernelStack: 14200 kB' 'PageTables: 3020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 337404 kB' 'Slab: 757412 kB' 'SReclaimable: 337404 kB' 'SUnreclaim: 420008 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.907 23:40:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.907 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.908 23:40:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.908 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
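The scan above is the per-node variant of the same helper: with node=0, setup/common.sh@23-24 (traced earlier) switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo, and common.sh@29 strips the leading 'Node 0 ' prefix from every line so the same key matching still applies. A sketch of that per-node lookup, again simplified and with a hypothetical function name:

# Sketch: per-node meminfo lookup, mirroring the node=0 HugePages_Surp scan above.
shopt -s extglob                                    # needed for the +([0-9]) pattern
get_node_meminfo_sketch() {
    local get=$1 node=$2 line var val _
    local mem_f=/sys/devices/system/node/node${node}/meminfo
    [[ -e $mem_f ]] || mem_f=/proc/meminfo          # fall back to the global file, as common.sh@22-24 does in reverse
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }                 # drop the "Node 0 " prefix (cf. common.sh@29)
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}
# e.g. get_node_meminfo_sketch HugePages_Surp 0   -> 0 in this run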
00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:00.909 node0=1024 expecting 1024 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.909 23:40:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:04.205 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:04.205 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:04.205 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:04.205 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:04.205 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:04.205 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:04.205 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:04.205 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:04.205 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:04.205 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:04.205 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:04.205 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:04.205 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:04.205 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:04.205 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:04.205 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:04.205 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:04.205 INFO: Requested 512 hugepages but 1024 already 
allocated on node0 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108563868 kB' 'MemAvailable: 112304904 kB' 'Buffers: 4132 kB' 'Cached: 10670036 kB' 'SwapCached: 0 kB' 'Active: 7636656 kB' 'Inactive: 3701320 kB' 'Active(anon): 7145224 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 666612 kB' 'Mapped: 201920 kB' 'Shmem: 6481416 kB' 'KReclaimable: 589228 kB' 'Slab: 1470420 kB' 'SReclaimable: 589228 kB' 'SUnreclaim: 881192 kB' 'KernelStack: 27696 kB' 'PageTables: 9176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8743244 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238320 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 23:40:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 23:40:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.205 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.471 
23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.471 
23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.471 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
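The wall of "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue" records above and below is the harness walking every /proc/meminfo field until it reaches the one it was asked for (here AnonHugePages, which came back 0, hence anon=0). A minimal stand-alone re-creation of that lookup, assuming only bash and /proc/meminfo (the helper name get_meminfo_field is illustrative, not the actual SPDK setup/common.sh function):

get_meminfo_field() {
    # Print the value column of a single /proc/meminfo field, or 0 if it is absent.
    # Usage: get_meminfo_field AnonHugePages
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < /proc/meminfo
    echo 0
}

# Against the snapshot printed above this would yield, for example:
#   get_meminfo_field HugePages_Total  -> 1024
#   get_meminfo_field AnonHugePages    -> 0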
00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108563716 kB' 'MemAvailable: 112304752 kB' 'Buffers: 4132 kB' 'Cached: 10670040 kB' 'SwapCached: 0 kB' 'Active: 7636984 kB' 'Inactive: 3701320 kB' 'Active(anon): 7145552 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 666936 kB' 'Mapped: 201920 kB' 'Shmem: 6481420 kB' 'KReclaimable: 589228 kB' 'Slab: 1470420 kB' 'SReclaimable: 589228 kB' 'SUnreclaim: 881192 kB' 'KernelStack: 27680 kB' 'PageTables: 9136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8743260 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238288 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.472 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
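The "[[ -e /sys/devices/system/node/node/meminfo ]]" test that recurs in this trace only fails because the node argument is empty for these system-wide lookups; when a node number is supplied, the same code reads the per-node file, and the "${mem[@]#Node +([0-9]) }" step strips the "Node <N> " prefix those files put in front of every key. A hedged per-node sketch of the same idea (the function name and the fallback behaviour are assumptions, not the SPDK code):

get_node_meminfo_field() {
    # Usage: get_node_meminfo_field HugePages_Free 0
    local get=$1 node=$2 line var val _
    local mem_f=/sys/devices/system/node/node${node}/meminfo
    [[ -e $mem_f ]] || mem_f=/proc/meminfo   # no such node: fall back to the system-wide file
    while IFS= read -r line; do
        line=${line#"Node $node "}           # per-node files prefix every key with "Node <N> "
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0
}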
00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
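Once the surplus and reserved passes come back 0 (the "surp=0" and "resv=0" records further down), the script checks "(( 1024 == nr_hugepages + surp + resv ))" and then echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0. The same bookkeeping can be reproduced outside the harness; this is a self-contained sketch using plain awk, not the setup/hugepages.sh code, and the commented numbers are the ones from the meminfo snapshots in this trace (1024 pages of 2048 kB each, i.e. the logged Hugetlb: 2097152 kB):

read -r total free resv surp < <(awk '
    /^HugePages_Total:/ {t=$2}
    /^HugePages_Free:/  {f=$2}
    /^HugePages_Rsvd:/  {r=$2}
    /^HugePages_Surp:/  {s=$2}
    END {print t+0, f+0, r+0, s+0}' /proc/meminfo)

# Snapshot values above: total=1024, free=1024, resv=0, surp=0,
# and 1024 * 2048 kB = 2097152 kB matches the Hugetlb line.
if (( total == 1024 && resv == 0 && surp == 0 )); then
    echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp"
else
    echo "hugepage pool no longer matches the requested 1024 pages" >&2
fi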
00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.473 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108564032 kB' 'MemAvailable: 112305068 kB' 'Buffers: 4132 kB' 'Cached: 10670060 kB' 'SwapCached: 0 kB' 'Active: 7635652 kB' 'Inactive: 3701320 kB' 'Active(anon): 7144220 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 666044 kB' 'Mapped: 201820 kB' 'Shmem: 6481440 kB' 'KReclaimable: 589228 kB' 'Slab: 1470388 kB' 'SReclaimable: 589228 kB' 'SUnreclaim: 881160 kB' 'KernelStack: 27680 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8743284 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238288 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 
23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.474 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.475 23:40:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:04:04.475 nr_hugepages=1024
00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:04:04.475 resv_hugepages=0
00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:04:04.475 surplus_hugepages=0
00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:04:04.475 anon_hugepages=0
00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
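
# A minimal sketch of the key/value lookup that setup/common.sh@31-33 traces
# above: walk the "Key: value" pairs of /proc/meminfo and print the value for
# the requested key. The function name is illustrative, not SPDK's helper.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Each /proc/meminfo line looks like "HugePages_Rsvd:       0".
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}
get_meminfo_value HugePages_Rsvd    # prints 0 on the run traced above
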
00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.475 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.476 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108564032 kB' 'MemAvailable: 112305068 kB' 'Buffers: 4132 kB' 'Cached: 10670080 kB' 'SwapCached: 0 kB' 'Active: 7635648 kB' 'Inactive: 3701320 kB' 'Active(anon): 7144216 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701320 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 666044 kB' 'Mapped: 201820 kB' 'Shmem: 6481460 kB' 'KReclaimable: 589228 kB' 'Slab: 1470388 kB' 'SReclaimable: 589228 kB' 'SUnreclaim: 881160 kB' 'KernelStack: 27680 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8743304 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238288 kB' 'VmallocChunk: 0 kB' 'Percpu: 150336 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4126068 kB' 'DirectMap2M: 57419776 kB' 'DirectMap1G: 74448896 kB'
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
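
# A rough sketch of the get_nodes step traced above (setup/hugepages.sh@26-32):
# enumerate the NUMA node directories and record how many hugepages each node
# currently holds. The hugepages-2048kB subdirectory is an assumption; other
# page sizes have their own hugepages-* directories.
shopt -s extglob nullglob
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}
echo "no_nodes=$no_nodes nodes=${!nodes_sys[*]}"   # e.g. no_nodes=2 nodes=0 1
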
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.477 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59052796 kB' 'MemUsed: 6606212 kB' 'SwapCached: 0 kB' 'Active: 1902328 kB' 'Inactive: 285896 kB' 'Active(anon): 1744580 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 285896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2079852 kB' 'Mapped: 36324 kB' 'AnonPages: 111576 kB' 'Shmem: 1636208 kB' 'KernelStack: 14264 kB' 'PageTables: 3180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 337404 kB' 'Slab: 757896 kB' 'SReclaimable: 337404 kB' 'SUnreclaim: 420492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
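
# The per-node pass above switches mem_f to the node's own meminfo file, whose
# lines carry a "Node <n> " prefix. A small sketch of the same idea, using the
# extglob strip seen at setup/common.sh@29; the node index 0 is just the value
# from this run.
shopt -s extglob
node=0
mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
mem=("${mem[@]#Node +([0-9]) }")           # drop the "Node 0 " prefix
printf '%s\n' "${mem[@]}" | grep -E '^HugePages_(Total|Free|Surp):'
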
00:04:04.478 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.478 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.478 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.478 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:04:04.478 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:04.478 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:04.478 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:04:04.478 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:04:04.478 node0=1024 expecting 1024
00:04:04.479 23:40:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:04:04.479
00:04:04.479 real 0m8.044s
00:04:04.479 user 0m3.055s
00:04:04.479 sys 0m5.113s
00:04:04.479 23:40:19 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1118 -- # xtrace_disable
00:04:04.479 23:40:19 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:04.479 ************************************
00:04:04.479 END TEST no_shrink_alloc
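
# The arithmetic the no_shrink_alloc test keeps re-checking above can be
# reproduced with the get_meminfo_value sketch from earlier (an assumed helper,
# not SPDK's). It mirrors the checks at setup/hugepages.sh@106 and @109 as
# shown in this trace, with nr_hugepages being the count this run requested.
nr_hugepages=1024
total=$(get_meminfo_value HugePages_Total)
surp=$(get_meminfo_value HugePages_Surp)
resv=$(get_meminfo_value HugePages_Rsvd)
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool consistent: total=$total surp=$surp resv=$resv"
else
    echo "unexpected hugepage accounting" >&2
fi
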
00:04:04.479 ************************************
00:04:04.479 23:40:19 setup.sh.hugepages -- common/autotest_common.sh@1136 -- # return 0
00:04:04.479 23:40:19 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp
00:04:04.479 23:40:19 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp
00:04:04.479 23:40:19 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:04:04.479 23:40:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:04.479 23:40:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:04:04.479 23:40:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:04.479 23:40:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:04:04.479 23:40:19 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:04:04.479 23:40:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:04.479 23:40:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:04:04.479 23:40:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:04.479 23:40:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:04:04.479 23:40:19 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes
00:04:04.479 23:40:19 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes
00:04:04.479
00:04:04.479 real 0m24.383s
00:04:04.479 user 0m9.283s
00:04:04.479 sys 0m15.349s
00:04:04.479 23:40:19 setup.sh.hugepages -- common/autotest_common.sh@1118 -- # xtrace_disable
00:04:04.479 23:40:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:04.479 ************************************
00:04:04.479 END TEST hugepages
00:04:04.479 ************************************
00:04:04.479 23:40:19 setup.sh -- common/autotest_common.sh@1136 -- # return 0
00:04:04.479 23:40:19 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:04:04.479 23:40:19 setup.sh -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:04:04.479 23:40:19 setup.sh -- common/autotest_common.sh@1099 -- # xtrace_disable
00:04:04.479 23:40:19 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:04.479 ************************************
00:04:04.479 START TEST driver
00:04:04.479 ************************************
00:04:04.479 23:40:19 setup.sh.driver -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:04:04.739 * Looking for test storage...
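
# A compact sketch of the clear_hp cleanup traced above (setup/hugepages.sh@36-44):
# zero every per-node hugepage counter so the next suite starts from a clean
# pool. Writing these sysfs files needs root; the sudo tee form is an
# assumption about how you would run it interactively.
for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 | sudo tee "$hp/nr_hugepages" > /dev/null
    done
done
export CLEAR_HUGE=yes
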
00:04:04.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:04.739 23:40:19 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:04:04.739 23:40:19 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:04.739 23:40:19 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:10.024 23:40:24 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:10.024 23:40:24 setup.sh.driver -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:04:10.024 23:40:24 setup.sh.driver -- common/autotest_common.sh@1099 -- # xtrace_disable
00:04:10.024 23:40:24 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:10.024 ************************************
00:04:10.024 START TEST guess_driver
00:04:10.024 ************************************
00:04:10.024 23:40:24 setup.sh.driver.guess_driver -- common/autotest_common.sh@1117 -- # guess_driver
00:04:10.024 23:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:10.024 23:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:04:10.024 23:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:04:10.024 23:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:04:10.024 23:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:10.024 23:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:04:10.024 23:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:10.024 23:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 370 > 0 ))
00:04:10.024 23:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:04:10.024 23:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:04:10.024 23:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:04:10.024 23:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:04:10.024 23:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:04:10.024 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:04:10.024 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:04:10.024 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:04:10.024 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:04:10.024 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:04:10.024 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:04:10.024 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:04:10.024 23:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:04:10.024 23:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:04:10.024 23:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
00:04:10.024 23:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:10.024 23:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:04:10.024 Looking for driver=vfio-pci
00:04:10.024 23:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:10.024 23:40:24 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:04:10.024 23:40:24 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:04:10.024 23:40:24 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:14.229 23:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:14.229 23:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:14.229 23:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:14.229 23:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:14.229 23:40:28 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:04:14.229 23:40:28 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:14.229 23:40:28 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:19.531
00:04:19.531 real 0m9.030s
00:04:19.531 user 0m3.028s
00:04:19.531 sys 0m5.253s
00:04:19.531 23:40:33 setup.sh.driver.guess_driver -- common/autotest_common.sh@1118 -- # xtrace_disable
00:04:19.531 23:40:33 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:04:19.531 ************************************
00:04:19.531 END TEST guess_driver
00:04:19.531 ************************************
00:04:19.531 23:40:33 setup.sh.driver -- common/autotest_common.sh@1136 -- # return 0
00:04:19.531
00:04:19.531 real 0m14.269s
00:04:19.531 user 0m4.610s
00:04:19.531 sys 0m8.142s
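
# A condensed reading of the pick_driver decision traced above: prefer
# vfio-pci when IOMMU groups exist (or unsafe no-IOMMU mode is enabled) and
# the module resolves via modprobe, otherwise report that no driver was found.
# This is a sketch of the logic shown in the trace, not a drop-in copy of
# setup/driver.sh.
shopt -s nullglob
pick_driver() {
    local unsafe_vfio=N
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] \
        && unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    if { (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; } \
        && modprobe --show-depends vfio_pci &> /dev/null; then
        echo vfio-pci
    else
        echo 'No valid driver found'
    fi
}
driver=$(pick_driver)    # vfio-pci on this host (370 IOMMU groups above)
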
setup.sh.driver -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:19.531 23:40:33 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:19.531 ************************************ 00:04:19.531 END TEST driver 00:04:19.531 ************************************ 00:04:19.531 23:40:33 setup.sh -- common/autotest_common.sh@1136 -- # return 0 00:04:19.531 23:40:33 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:19.531 23:40:33 setup.sh -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:19.531 23:40:33 setup.sh -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:19.531 23:40:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:19.531 ************************************ 00:04:19.531 START TEST devices 00:04:19.531 ************************************ 00:04:19.531 23:40:33 setup.sh.devices -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:19.531 * Looking for test storage... 00:04:19.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:19.531 23:40:34 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:19.531 23:40:34 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:19.531 23:40:34 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:19.531 23:40:34 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:23.782 23:40:38 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:23.782 23:40:38 setup.sh.devices -- common/autotest_common.sh@1663 -- # zoned_devs=() 00:04:23.782 23:40:38 setup.sh.devices -- common/autotest_common.sh@1663 -- # local -gA zoned_devs 00:04:23.782 23:40:38 setup.sh.devices -- common/autotest_common.sh@1664 -- # local nvme bdf 00:04:23.782 23:40:38 setup.sh.devices -- common/autotest_common.sh@1666 -- # for nvme in /sys/block/nvme* 00:04:23.782 23:40:38 setup.sh.devices -- common/autotest_common.sh@1667 -- # is_block_zoned nvme0n1 00:04:23.782 23:40:38 setup.sh.devices -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:04:23.782 23:40:38 setup.sh.devices -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:23.782 23:40:38 setup.sh.devices -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:04:23.782 23:40:38 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:23.782 23:40:38 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:23.782 23:40:38 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:23.782 23:40:38 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:23.782 23:40:38 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:23.782 23:40:38 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:23.782 23:40:38 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:23.782 23:40:38 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:23.782 23:40:38 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:23.782 23:40:38 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:23.782 23:40:38 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:23.782 23:40:38 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:23.782 
23:40:38 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:23.782 No valid GPT data, bailing 00:04:23.782 23:40:38 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:23.782 23:40:38 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:23.782 23:40:38 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:23.782 23:40:38 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:23.782 23:40:38 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:23.782 23:40:38 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:23.782 23:40:38 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:23.782 23:40:38 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:23.782 23:40:38 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:23.782 23:40:38 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:23.782 23:40:38 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:23.782 23:40:38 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:23.782 23:40:38 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:23.782 23:40:38 setup.sh.devices -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:23.782 23:40:38 setup.sh.devices -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:23.782 23:40:38 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:23.782 ************************************ 00:04:23.782 START TEST nvme_mount 00:04:23.782 ************************************ 00:04:23.782 23:40:38 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1117 -- # nvme_mount 00:04:23.782 23:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:23.782 23:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:23.782 23:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.782 23:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:23.782 23:40:38 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:23.782 23:40:38 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:23.782 23:40:38 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:23.782 23:40:38 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:23.782 23:40:38 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:23.782 23:40:38 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:23.782 23:40:38 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:23.782 23:40:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:23.782 23:40:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.782 23:40:38 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:23.782 23:40:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:23.782 23:40:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
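
The disk-qualification step traced above boils down to two checks: the candidate block device must carry no partition-table signature and must be at least 3 GiB. A minimal sketch of that logic (not the SPDK script itself; the device name is an assumption for illustration):

    dev=nvme0n1                                        # assumed device name
    min_disk_size=$((3 * 1024 * 1024 * 1024))          # 3221225472 bytes, as in the trace

    pt=$(blkid -s PTTYPE -o value "/dev/$dev")         # empty when no partition table is present
    size=$(( $(cat "/sys/block/$dev/size") * 512 ))    # sysfs reports 512-byte sectors

    if [[ -z "$pt" ]] && (( size >= min_disk_size )); then
        echo "/dev/$dev is usable as a test disk ($size bytes)"
    fi
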
# (( part <= part_no )) 00:04:23.782 23:40:38 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:23.782 23:40:38 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:23.782 23:40:38 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:24.353 Creating new GPT entries in memory. 00:04:24.353 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:24.353 other utilities. 00:04:24.353 23:40:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:24.353 23:40:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:24.353 23:40:39 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:24.353 23:40:39 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:24.353 23:40:39 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:25.294 Creating new GPT entries in memory. 00:04:25.294 The operation has completed successfully. 00:04:25.294 23:40:40 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:25.294 23:40:40 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.294 23:40:40 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 210715 00:04:25.555 23:40:40 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.556 23:40:40 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:25.556 23:40:40 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.556 23:40:40 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:25.556 23:40:40 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:25.556 23:40:40 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.556 23:40:40 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.556 23:40:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:25.556 23:40:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:25.556 23:40:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.556 23:40:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.556 23:40:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:25.556 23:40:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:25.556 23:40:40 
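
The nvme_mount setup traced here is a plain partition, format, and mount sequence. A rough sketch under assumed paths (the real test mounts under spdk/test/setup and uses its own helpers):

    disk=/dev/nvme0n1                 # assumed device
    mnt=/tmp/nvme_mount               # hypothetical mount point

    sgdisk "$disk" --zap-all                  # clear any existing GPT/MBR signatures
    sgdisk "$disk" --new=1:2048:2099199       # one ~1 GiB partition starting at sector 2048
    mkfs.ext4 -qF "${disk}p1"
    mkdir -p "$mnt"
    mount "${disk}p1" "$mnt"
    touch "$mnt/test_nvme"                    # dummy file the verify step later looks for
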
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:25.556 23:40:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:25.556 23:40:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.556 23:40:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:25.556 23:40:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:25.556 23:40:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.556 23:40:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:29.764 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:29.764 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:29.764 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:29.764 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:29.764 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.764 23:40:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.064 23:40:48 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:33.064 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.325 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.325 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:33.325 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.325 23:40:48 
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:33.325 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:33.325 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.325 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:33.325 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:33.325 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:33.325 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:33.325 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:33.325 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:33.325 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:33.325 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:33.325 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.325 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:33.325 23:40:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:33.325 23:40:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.325 23:40:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:37.575 23:40:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.575 23:40:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.575 23:40:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.575 23:40:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.575 23:40:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.575 23:40:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.575 23:40:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.575 23:40:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.575 23:40:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.575 23:40:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.575 23:40:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.575 23:40:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.575 23:40:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.575 23:40:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.575 23:40:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.575 23:40:51 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.575 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.575 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:37.575 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:37.575 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:37.576 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:37.576 00:04:37.576 real 0m13.736s 00:04:37.576 user 0m4.302s 00:04:37.576 sys 0m7.339s 00:04:37.576 23:40:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:37.576 23:40:52 
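
The cleanup_nvme path traced above (unmount if still mounted, then scrub filesystem and partition-table signatures) is roughly equivalent to the following sketch, with the mount point assumed:

    mnt=/tmp/nvme_mount                                      # hypothetical mount point
    mountpoint -q "$mnt" && umount "$mnt"
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # drop the ext4 signature
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1       # drop GPT/PMBR signatures
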
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:37.576 ************************************ 00:04:37.576 END TEST nvme_mount 00:04:37.576 ************************************ 00:04:37.576 23:40:52 setup.sh.devices -- common/autotest_common.sh@1136 -- # return 0 00:04:37.576 23:40:52 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:37.576 23:40:52 setup.sh.devices -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:37.576 23:40:52 setup.sh.devices -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:37.576 23:40:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:37.576 ************************************ 00:04:37.576 START TEST dm_mount 00:04:37.576 ************************************ 00:04:37.576 23:40:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@1117 -- # dm_mount 00:04:37.576 23:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:37.576 23:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:37.576 23:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:37.576 23:40:52 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:37.576 23:40:52 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:37.576 23:40:52 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:37.576 23:40:52 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:37.576 23:40:52 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:37.576 23:40:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:37.576 23:40:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:37.576 23:40:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:37.576 23:40:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.576 23:40:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:37.576 23:40:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:37.576 23:40:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.576 23:40:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:37.576 23:40:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:37.576 23:40:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.576 23:40:52 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:37.576 23:40:52 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:37.576 23:40:52 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:38.148 Creating new GPT entries in memory. 00:04:38.148 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:38.148 other utilities. 00:04:38.148 23:40:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:38.148 23:40:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:38.148 23:40:53 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:38.148 23:40:53 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:38.148 23:40:53 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:39.089 Creating new GPT entries in memory. 00:04:39.089 The operation has completed successfully. 00:04:39.089 23:40:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:39.089 23:40:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:39.089 23:40:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:39.089 23:40:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:39.089 23:40:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:40.472 The operation has completed successfully. 00:04:40.472 23:40:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:40.472 23:40:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.472 23:40:55 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 216276 00:04:40.472 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:40.472 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:40.472 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:40.472 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:40.472 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:40.472 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:40.472 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:40.472 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:40.472 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:40.472 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:04:40.472 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-1 00:04:40.472 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:04:40.472 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:04:40.472 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:40.472 23:40:55 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:40.473 23:40:55 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:40.473 23:40:55 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:40.473 23:40:55 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:40.473 23:40:55 setup.sh.devices.dm_mount -- 
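
The dm_mount sequence traced above creates two 1 GiB partitions, maps them into a single device-mapper target named nvme_dm_test, resolves the dm-N node, and formats the mapper device. The dm table itself is not shown in the trace; a linear concatenation of the two partitions is assumed here purely for illustration:

    p1=/dev/nvme0n1p1
    p2=/dev/nvme0n1p2
    len1=$(blockdev --getsz "$p1")            # partition lengths in 512-byte sectors
    len2=$(blockdev --getsz "$p2")

    # Concatenate the two partitions into one mapper device, then format it.
    dmsetup create nvme_dm_test <<EOF
    0 $len1 linear $p1 0
    $len1 $len2 linear $p2 0
    EOF

    dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")   # resolves to dm-N, dm-1 in the trace
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
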
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:40.473 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:40.473 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:40.473 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:40.473 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:40.473 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:40.473 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:40.473 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:40.473 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:40.473 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:40.473 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.473 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:40.473 23:40:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:40.473 23:40:55 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.473 23:40:55 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:43.790 23:40:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.790 23:40:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.790 23:40:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.790 23:40:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.790 23:40:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.790 23:40:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.790 23:40:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.790 23:40:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.790 23:40:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.790 23:40:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.790 23:40:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.790 23:40:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.790 23:40:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.790 23:40:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.790 23:40:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.790 23:40:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:04:44.052 23:40:59 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.052 23:40:59 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:48.257 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.257 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.257 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:48.258 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:48.258 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.258 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:48.258 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:48.258 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:48.258 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:48.258 00:04:48.258 real 0m10.817s 00:04:48.258 user 0m3.007s 00:04:48.258 sys 0m4.883s 00:04:48.258 23:41:03 setup.sh.devices.dm_mount -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:48.258 23:41:03 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:48.258 ************************************ 00:04:48.258 END TEST dm_mount 00:04:48.258 ************************************ 00:04:48.258 23:41:03 setup.sh.devices -- common/autotest_common.sh@1136 -- # 
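
The cleanup_dm teardown traced above works in the opposite order of setup: unmount, remove the mapper device, and only then wipe the backing partitions (which are held by dm-1 until the mapping is gone). A short sketch with an assumed mount point:

    mountpoint -q /tmp/dm_mount && umount /tmp/dm_mount          # hypothetical mount point
    [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
    wipefs --all /dev/nvme0n1p1 /dev/nvme0n1p2                   # only after the holder is gone
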
return 0 00:04:48.258 23:41:03 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:48.258 23:41:03 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:48.258 23:41:03 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.258 23:41:03 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.258 23:41:03 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:48.258 23:41:03 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:48.258 23:41:03 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:48.258 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:48.258 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:48.258 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:48.258 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:48.258 23:41:03 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:48.258 23:41:03 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:48.258 23:41:03 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:48.258 23:41:03 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.258 23:41:03 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:48.258 23:41:03 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:48.258 23:41:03 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:48.258 00:04:48.258 real 0m29.379s 00:04:48.258 user 0m8.916s 00:04:48.258 sys 0m15.322s 00:04:48.258 23:41:03 setup.sh.devices -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:48.258 23:41:03 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:48.258 ************************************ 00:04:48.258 END TEST devices 00:04:48.258 ************************************ 00:04:48.258 23:41:03 setup.sh -- common/autotest_common.sh@1136 -- # return 0 00:04:48.258 00:04:48.258 real 1m35.585s 00:04:48.258 user 0m32.132s 00:04:48.258 sys 0m54.964s 00:04:48.258 23:41:03 setup.sh -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:48.258 23:41:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:48.258 ************************************ 00:04:48.258 END TEST setup.sh 00:04:48.258 ************************************ 00:04:48.518 23:41:03 -- common/autotest_common.sh@1136 -- # return 0 00:04:48.518 23:41:03 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:52.727 Hugepages 00:04:52.727 node hugesize free / total 00:04:52.727 node0 1048576kB 0 / 0 00:04:52.727 node0 2048kB 1024 / 1024 00:04:52.727 node1 1048576kB 0 / 0 00:04:52.727 node1 2048kB 1024 / 1024 00:04:52.727 00:04:52.727 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:52.727 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:52.727 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:52.727 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:52.727 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:52.727 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:52.727 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:52.727 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:52.727 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:52.727 
NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:52.728 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:52.728 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:52.728 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:52.728 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:52.728 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:52.728 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:52.728 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:52.728 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:52.728 23:41:07 -- spdk/autotest.sh@130 -- # uname -s 00:04:52.728 23:41:07 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:52.728 23:41:07 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:52.728 23:41:07 -- common/autotest_common.sh@1525 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:56.033 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:56.033 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:56.033 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:56.033 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:56.033 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:56.033 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:56.033 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:56.033 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:56.033 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:56.033 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:56.033 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:56.033 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:56.033 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:56.033 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:56.033 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:56.033 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:57.949 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:57.949 23:41:13 -- common/autotest_common.sh@1526 -- # sleep 1 00:04:58.892 23:41:14 -- common/autotest_common.sh@1527 -- # bdfs=() 00:04:58.892 23:41:14 -- common/autotest_common.sh@1527 -- # local bdfs 00:04:58.892 23:41:14 -- common/autotest_common.sh@1528 -- # bdfs=($(get_nvme_bdfs)) 00:04:58.892 23:41:14 -- common/autotest_common.sh@1528 -- # get_nvme_bdfs 00:04:58.893 23:41:14 -- common/autotest_common.sh@1507 -- # bdfs=() 00:04:58.893 23:41:14 -- common/autotest_common.sh@1507 -- # local bdfs 00:04:58.893 23:41:14 -- common/autotest_common.sh@1508 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.893 23:41:14 -- common/autotest_common.sh@1508 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:58.893 23:41:14 -- common/autotest_common.sh@1508 -- # jq -r '.config[].params.traddr' 00:04:59.153 23:41:14 -- common/autotest_common.sh@1509 -- # (( 1 == 0 )) 00:04:59.153 23:41:14 -- common/autotest_common.sh@1513 -- # printf '%s\n' 0000:65:00.0 00:04:59.153 23:41:14 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:03.364 Waiting for block devices as requested 00:05:03.364 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:03.364 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:03.364 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:03.364 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:03.364 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:03.364 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:03.364 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:03.364 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:03.364 0000:65:00.0 (144d 
a80a): vfio-pci -> nvme 00:05:03.624 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:03.624 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:03.884 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:03.884 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:03.884 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:03.884 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:04.144 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:04.144 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:04.144 23:41:19 -- common/autotest_common.sh@1532 -- # for bdf in "${bdfs[@]}" 00:05:04.144 23:41:19 -- common/autotest_common.sh@1533 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:04.144 23:41:19 -- common/autotest_common.sh@1496 -- # readlink -f /sys/class/nvme/nvme0 00:05:04.144 23:41:19 -- common/autotest_common.sh@1496 -- # grep 0000:65:00.0/nvme/nvme 00:05:04.144 23:41:19 -- common/autotest_common.sh@1496 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:04.144 23:41:19 -- common/autotest_common.sh@1497 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:04.144 23:41:19 -- common/autotest_common.sh@1501 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:04.144 23:41:19 -- common/autotest_common.sh@1501 -- # printf '%s\n' nvme0 00:05:04.144 23:41:19 -- common/autotest_common.sh@1533 -- # nvme_ctrlr=/dev/nvme0 00:05:04.144 23:41:19 -- common/autotest_common.sh@1534 -- # [[ -z /dev/nvme0 ]] 00:05:04.144 23:41:19 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:04.144 23:41:19 -- common/autotest_common.sh@1539 -- # grep oacs 00:05:04.144 23:41:19 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:04.144 23:41:19 -- common/autotest_common.sh@1539 -- # oacs=' 0x5f' 00:05:04.145 23:41:19 -- common/autotest_common.sh@1540 -- # oacs_ns_manage=8 00:05:04.145 23:41:19 -- common/autotest_common.sh@1542 -- # [[ 8 -ne 0 ]] 00:05:04.145 23:41:19 -- common/autotest_common.sh@1548 -- # nvme id-ctrl /dev/nvme0 00:05:04.145 23:41:19 -- common/autotest_common.sh@1548 -- # cut -d: -f2 00:05:04.145 23:41:19 -- common/autotest_common.sh@1548 -- # grep unvmcap 00:05:04.145 23:41:19 -- common/autotest_common.sh@1548 -- # unvmcap=' 0' 00:05:04.145 23:41:19 -- common/autotest_common.sh@1549 -- # [[ 0 -eq 0 ]] 00:05:04.145 23:41:19 -- common/autotest_common.sh@1551 -- # continue 00:05:04.145 23:41:19 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:04.145 23:41:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:04.145 23:41:19 -- common/autotest_common.sh@10 -- # set +x 00:05:04.145 23:41:19 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:04.145 23:41:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:04.145 23:41:19 -- common/autotest_common.sh@10 -- # set +x 00:05:04.145 23:41:19 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:08.419 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:08.419 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:08.419 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:08.419 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:08.419 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:08.419 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:08.419 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:08.419 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:08.419 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:08.419 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 
00:05:08.419 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:08.419 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:08.419 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:08.419 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:08.419 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:08.419 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:08.419 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:08.419 23:41:23 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:08.419 23:41:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:08.419 23:41:23 -- common/autotest_common.sh@10 -- # set +x 00:05:08.419 23:41:23 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:08.419 23:41:23 -- common/autotest_common.sh@1585 -- # mapfile -t bdfs 00:05:08.419 23:41:23 -- common/autotest_common.sh@1585 -- # get_nvme_bdfs_by_id 0x0a54 00:05:08.419 23:41:23 -- common/autotest_common.sh@1571 -- # bdfs=() 00:05:08.419 23:41:23 -- common/autotest_common.sh@1571 -- # local bdfs 00:05:08.419 23:41:23 -- common/autotest_common.sh@1573 -- # get_nvme_bdfs 00:05:08.419 23:41:23 -- common/autotest_common.sh@1507 -- # bdfs=() 00:05:08.419 23:41:23 -- common/autotest_common.sh@1507 -- # local bdfs 00:05:08.419 23:41:23 -- common/autotest_common.sh@1508 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:08.419 23:41:23 -- common/autotest_common.sh@1508 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:08.419 23:41:23 -- common/autotest_common.sh@1508 -- # jq -r '.config[].params.traddr' 00:05:08.419 23:41:23 -- common/autotest_common.sh@1509 -- # (( 1 == 0 )) 00:05:08.419 23:41:23 -- common/autotest_common.sh@1513 -- # printf '%s\n' 0000:65:00.0 00:05:08.419 23:41:23 -- common/autotest_common.sh@1573 -- # for bdf in $(get_nvme_bdfs) 00:05:08.419 23:41:23 -- common/autotest_common.sh@1574 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:08.419 23:41:23 -- common/autotest_common.sh@1574 -- # device=0xa80a 00:05:08.419 23:41:23 -- common/autotest_common.sh@1575 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:08.419 23:41:23 -- common/autotest_common.sh@1580 -- # printf '%s\n' 00:05:08.419 23:41:23 -- common/autotest_common.sh@1586 -- # [[ -z '' ]] 00:05:08.419 23:41:23 -- common/autotest_common.sh@1587 -- # return 0 00:05:08.419 23:41:23 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:08.419 23:41:23 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:08.419 23:41:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:08.419 23:41:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:08.419 23:41:23 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:08.419 23:41:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:08.419 23:41:23 -- common/autotest_common.sh@10 -- # set +x 00:05:08.419 23:41:23 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:08.419 23:41:23 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:08.419 23:41:23 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:08.419 23:41:23 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:08.420 23:41:23 -- common/autotest_common.sh@10 -- # set +x 00:05:08.420 ************************************ 00:05:08.420 START TEST env 00:05:08.420 ************************************ 00:05:08.420 23:41:23 env -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:08.420 * Looking for test storage... 
00:05:08.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:08.420 23:41:23 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:08.420 23:41:23 env -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:08.420 23:41:23 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:08.420 23:41:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.420 ************************************ 00:05:08.420 START TEST env_memory 00:05:08.420 ************************************ 00:05:08.420 23:41:23 env.env_memory -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:08.420 00:05:08.420 00:05:08.420 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.420 http://cunit.sourceforge.net/ 00:05:08.420 00:05:08.420 00:05:08.420 Suite: memory 00:05:08.680 Test: alloc and free memory map ...[2024-07-15 23:41:23.627408] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:08.680 passed 00:05:08.680 Test: mem map translation ...[2024-07-15 23:41:23.652997] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:08.680 [2024-07-15 23:41:23.653030] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:08.680 [2024-07-15 23:41:23.653076] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:08.680 [2024-07-15 23:41:23.653084] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:08.680 passed 00:05:08.680 Test: mem map registration ...[2024-07-15 23:41:23.708358] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:08.680 [2024-07-15 23:41:23.708391] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:08.680 passed 00:05:08.680 Test: mem map adjacent registrations ...passed 00:05:08.680 00:05:08.680 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.680 suites 1 1 n/a 0 0 00:05:08.680 tests 4 4 4 0 0 00:05:08.680 asserts 152 152 152 0 n/a 00:05:08.680 00:05:08.680 Elapsed time = 0.194 seconds 00:05:08.680 00:05:08.680 real 0m0.209s 00:05:08.680 user 0m0.198s 00:05:08.680 sys 0m0.010s 00:05:08.680 23:41:23 env.env_memory -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:08.680 23:41:23 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:08.680 ************************************ 00:05:08.680 END TEST env_memory 00:05:08.680 ************************************ 00:05:08.680 23:41:23 env -- common/autotest_common.sh@1136 -- # return 0 00:05:08.680 23:41:23 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:08.680 23:41:23 env -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 
00:05:08.680 23:41:23 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:08.680 23:41:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.680 ************************************ 00:05:08.680 START TEST env_vtophys 00:05:08.680 ************************************ 00:05:08.680 23:41:23 env.env_vtophys -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:08.940 EAL: lib.eal log level changed from notice to debug 00:05:08.940 EAL: Detected lcore 0 as core 0 on socket 0 00:05:08.940 EAL: Detected lcore 1 as core 1 on socket 0 00:05:08.940 EAL: Detected lcore 2 as core 2 on socket 0 00:05:08.940 EAL: Detected lcore 3 as core 3 on socket 0 00:05:08.940 EAL: Detected lcore 4 as core 4 on socket 0 00:05:08.940 EAL: Detected lcore 5 as core 5 on socket 0 00:05:08.940 EAL: Detected lcore 6 as core 6 on socket 0 00:05:08.940 EAL: Detected lcore 7 as core 7 on socket 0 00:05:08.940 EAL: Detected lcore 8 as core 8 on socket 0 00:05:08.940 EAL: Detected lcore 9 as core 9 on socket 0 00:05:08.940 EAL: Detected lcore 10 as core 10 on socket 0 00:05:08.940 EAL: Detected lcore 11 as core 11 on socket 0 00:05:08.940 EAL: Detected lcore 12 as core 12 on socket 0 00:05:08.940 EAL: Detected lcore 13 as core 13 on socket 0 00:05:08.940 EAL: Detected lcore 14 as core 14 on socket 0 00:05:08.940 EAL: Detected lcore 15 as core 15 on socket 0 00:05:08.940 EAL: Detected lcore 16 as core 16 on socket 0 00:05:08.940 EAL: Detected lcore 17 as core 17 on socket 0 00:05:08.940 EAL: Detected lcore 18 as core 18 on socket 0 00:05:08.940 EAL: Detected lcore 19 as core 19 on socket 0 00:05:08.940 EAL: Detected lcore 20 as core 20 on socket 0 00:05:08.940 EAL: Detected lcore 21 as core 21 on socket 0 00:05:08.940 EAL: Detected lcore 22 as core 22 on socket 0 00:05:08.940 EAL: Detected lcore 23 as core 23 on socket 0 00:05:08.940 EAL: Detected lcore 24 as core 24 on socket 0 00:05:08.940 EAL: Detected lcore 25 as core 25 on socket 0 00:05:08.940 EAL: Detected lcore 26 as core 26 on socket 0 00:05:08.940 EAL: Detected lcore 27 as core 27 on socket 0 00:05:08.940 EAL: Detected lcore 28 as core 28 on socket 0 00:05:08.940 EAL: Detected lcore 29 as core 29 on socket 0 00:05:08.940 EAL: Detected lcore 30 as core 30 on socket 0 00:05:08.940 EAL: Detected lcore 31 as core 31 on socket 0 00:05:08.940 EAL: Detected lcore 32 as core 32 on socket 0 00:05:08.940 EAL: Detected lcore 33 as core 33 on socket 0 00:05:08.940 EAL: Detected lcore 34 as core 34 on socket 0 00:05:08.940 EAL: Detected lcore 35 as core 35 on socket 0 00:05:08.940 EAL: Detected lcore 36 as core 0 on socket 1 00:05:08.940 EAL: Detected lcore 37 as core 1 on socket 1 00:05:08.940 EAL: Detected lcore 38 as core 2 on socket 1 00:05:08.940 EAL: Detected lcore 39 as core 3 on socket 1 00:05:08.940 EAL: Detected lcore 40 as core 4 on socket 1 00:05:08.940 EAL: Detected lcore 41 as core 5 on socket 1 00:05:08.940 EAL: Detected lcore 42 as core 6 on socket 1 00:05:08.940 EAL: Detected lcore 43 as core 7 on socket 1 00:05:08.940 EAL: Detected lcore 44 as core 8 on socket 1 00:05:08.940 EAL: Detected lcore 45 as core 9 on socket 1 00:05:08.940 EAL: Detected lcore 46 as core 10 on socket 1 00:05:08.940 EAL: Detected lcore 47 as core 11 on socket 1 00:05:08.940 EAL: Detected lcore 48 as core 12 on socket 1 00:05:08.940 EAL: Detected lcore 49 as core 13 on socket 1 00:05:08.940 EAL: Detected lcore 50 as core 14 on socket 1 00:05:08.940 EAL: Detected lcore 51 as core 15 on socket 1 00:05:08.940 
EAL: Detected lcore 52 as core 16 on socket 1 00:05:08.940 EAL: Detected lcore 53 as core 17 on socket 1 00:05:08.940 EAL: Detected lcore 54 as core 18 on socket 1 00:05:08.940 EAL: Detected lcore 55 as core 19 on socket 1 00:05:08.940 EAL: Detected lcore 56 as core 20 on socket 1 00:05:08.940 EAL: Detected lcore 57 as core 21 on socket 1 00:05:08.940 EAL: Detected lcore 58 as core 22 on socket 1 00:05:08.940 EAL: Detected lcore 59 as core 23 on socket 1 00:05:08.940 EAL: Detected lcore 60 as core 24 on socket 1 00:05:08.940 EAL: Detected lcore 61 as core 25 on socket 1 00:05:08.940 EAL: Detected lcore 62 as core 26 on socket 1 00:05:08.940 EAL: Detected lcore 63 as core 27 on socket 1 00:05:08.940 EAL: Detected lcore 64 as core 28 on socket 1 00:05:08.940 EAL: Detected lcore 65 as core 29 on socket 1 00:05:08.940 EAL: Detected lcore 66 as core 30 on socket 1 00:05:08.940 EAL: Detected lcore 67 as core 31 on socket 1 00:05:08.940 EAL: Detected lcore 68 as core 32 on socket 1 00:05:08.940 EAL: Detected lcore 69 as core 33 on socket 1 00:05:08.940 EAL: Detected lcore 70 as core 34 on socket 1 00:05:08.940 EAL: Detected lcore 71 as core 35 on socket 1 00:05:08.940 EAL: Detected lcore 72 as core 0 on socket 0 00:05:08.940 EAL: Detected lcore 73 as core 1 on socket 0 00:05:08.940 EAL: Detected lcore 74 as core 2 on socket 0 00:05:08.940 EAL: Detected lcore 75 as core 3 on socket 0 00:05:08.940 EAL: Detected lcore 76 as core 4 on socket 0 00:05:08.940 EAL: Detected lcore 77 as core 5 on socket 0 00:05:08.940 EAL: Detected lcore 78 as core 6 on socket 0 00:05:08.940 EAL: Detected lcore 79 as core 7 on socket 0 00:05:08.940 EAL: Detected lcore 80 as core 8 on socket 0 00:05:08.940 EAL: Detected lcore 81 as core 9 on socket 0 00:05:08.940 EAL: Detected lcore 82 as core 10 on socket 0 00:05:08.940 EAL: Detected lcore 83 as core 11 on socket 0 00:05:08.940 EAL: Detected lcore 84 as core 12 on socket 0 00:05:08.940 EAL: Detected lcore 85 as core 13 on socket 0 00:05:08.940 EAL: Detected lcore 86 as core 14 on socket 0 00:05:08.940 EAL: Detected lcore 87 as core 15 on socket 0 00:05:08.940 EAL: Detected lcore 88 as core 16 on socket 0 00:05:08.940 EAL: Detected lcore 89 as core 17 on socket 0 00:05:08.940 EAL: Detected lcore 90 as core 18 on socket 0 00:05:08.940 EAL: Detected lcore 91 as core 19 on socket 0 00:05:08.940 EAL: Detected lcore 92 as core 20 on socket 0 00:05:08.940 EAL: Detected lcore 93 as core 21 on socket 0 00:05:08.940 EAL: Detected lcore 94 as core 22 on socket 0 00:05:08.940 EAL: Detected lcore 95 as core 23 on socket 0 00:05:08.940 EAL: Detected lcore 96 as core 24 on socket 0 00:05:08.940 EAL: Detected lcore 97 as core 25 on socket 0 00:05:08.940 EAL: Detected lcore 98 as core 26 on socket 0 00:05:08.940 EAL: Detected lcore 99 as core 27 on socket 0 00:05:08.940 EAL: Detected lcore 100 as core 28 on socket 0 00:05:08.940 EAL: Detected lcore 101 as core 29 on socket 0 00:05:08.940 EAL: Detected lcore 102 as core 30 on socket 0 00:05:08.940 EAL: Detected lcore 103 as core 31 on socket 0 00:05:08.940 EAL: Detected lcore 104 as core 32 on socket 0 00:05:08.940 EAL: Detected lcore 105 as core 33 on socket 0 00:05:08.940 EAL: Detected lcore 106 as core 34 on socket 0 00:05:08.940 EAL: Detected lcore 107 as core 35 on socket 0 00:05:08.940 EAL: Detected lcore 108 as core 0 on socket 1 00:05:08.940 EAL: Detected lcore 109 as core 1 on socket 1 00:05:08.940 EAL: Detected lcore 110 as core 2 on socket 1 00:05:08.940 EAL: Detected lcore 111 as core 3 on socket 1 00:05:08.940 EAL: Detected 
lcore 112 as core 4 on socket 1 00:05:08.940 EAL: Detected lcore 113 as core 5 on socket 1 00:05:08.940 EAL: Detected lcore 114 as core 6 on socket 1 00:05:08.940 EAL: Detected lcore 115 as core 7 on socket 1 00:05:08.940 EAL: Detected lcore 116 as core 8 on socket 1 00:05:08.940 EAL: Detected lcore 117 as core 9 on socket 1 00:05:08.940 EAL: Detected lcore 118 as core 10 on socket 1 00:05:08.940 EAL: Detected lcore 119 as core 11 on socket 1 00:05:08.940 EAL: Detected lcore 120 as core 12 on socket 1 00:05:08.940 EAL: Detected lcore 121 as core 13 on socket 1 00:05:08.940 EAL: Detected lcore 122 as core 14 on socket 1 00:05:08.940 EAL: Detected lcore 123 as core 15 on socket 1 00:05:08.940 EAL: Detected lcore 124 as core 16 on socket 1 00:05:08.940 EAL: Detected lcore 125 as core 17 on socket 1 00:05:08.940 EAL: Detected lcore 126 as core 18 on socket 1 00:05:08.940 EAL: Detected lcore 127 as core 19 on socket 1 00:05:08.940 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:08.940 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:08.940 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:08.940 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:08.940 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:08.940 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:08.940 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:08.940 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:08.940 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:08.940 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:08.940 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:08.940 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:08.940 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:08.940 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:08.940 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:08.940 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:08.940 EAL: Maximum logical cores by configuration: 128 00:05:08.940 EAL: Detected CPU lcores: 128 00:05:08.940 EAL: Detected NUMA nodes: 2 00:05:08.940 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:08.940 EAL: Detected shared linkage of DPDK 00:05:08.940 EAL: No shared files mode enabled, IPC will be disabled 00:05:08.940 EAL: Bus pci wants IOVA as 'DC' 00:05:08.940 EAL: Buses did not request a specific IOVA mode. 00:05:08.940 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:08.940 EAL: Selected IOVA mode 'VA' 00:05:08.940 EAL: Probing VFIO support... 00:05:08.940 EAL: IOMMU type 1 (Type 1) is supported 00:05:08.940 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:08.940 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:08.940 EAL: VFIO support initialized 00:05:08.940 EAL: Ask a virtual area of 0x2e000 bytes 00:05:08.940 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:08.940 EAL: Setting up physically contiguous memory... 
00:05:08.940 EAL: Setting maximum number of open files to 524288 00:05:08.940 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:08.940 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:08.940 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:08.940 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.940 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:08.940 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.940 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.940 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:08.940 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:08.940 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.940 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:08.940 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.940 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.940 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:08.940 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:08.940 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.940 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:08.940 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.940 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.940 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:08.940 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:08.940 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.940 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:08.940 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:08.940 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.940 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:08.940 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:08.940 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:08.940 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.940 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:08.940 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.940 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.941 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:08.941 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:08.941 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.941 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:08.941 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.941 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.941 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:08.941 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:08.941 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.941 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:08.941 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.941 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.941 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:08.941 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:08.941 EAL: Ask a virtual area of 0x61000 bytes 00:05:08.941 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:08.941 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:08.941 EAL: Ask a virtual area of 0x400000000 bytes 00:05:08.941 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:08.941 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:08.941 EAL: Hugepages will be freed exactly as allocated. 00:05:08.941 EAL: No shared files mode enabled, IPC is disabled 00:05:08.941 EAL: No shared files mode enabled, IPC is disabled 00:05:08.941 EAL: TSC frequency is ~2400000 KHz 00:05:08.941 EAL: Main lcore 0 is ready (tid=7f76fdb85a00;cpuset=[0]) 00:05:08.941 EAL: Trying to obtain current memory policy. 00:05:08.941 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.941 EAL: Restoring previous memory policy: 0 00:05:08.941 EAL: request: mp_malloc_sync 00:05:08.941 EAL: No shared files mode enabled, IPC is disabled 00:05:08.941 EAL: Heap on socket 0 was expanded by 2MB 00:05:08.941 EAL: No shared files mode enabled, IPC is disabled 00:05:08.941 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:08.941 EAL: Mem event callback 'spdk:(nil)' registered 00:05:08.941 00:05:08.941 00:05:08.941 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.941 http://cunit.sourceforge.net/ 00:05:08.941 00:05:08.941 00:05:08.941 Suite: components_suite 00:05:08.941 Test: vtophys_malloc_test ...passed 00:05:08.941 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:08.941 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.941 EAL: Restoring previous memory policy: 4 00:05:08.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.941 EAL: request: mp_malloc_sync 00:05:08.941 EAL: No shared files mode enabled, IPC is disabled 00:05:08.941 EAL: Heap on socket 0 was expanded by 4MB 00:05:08.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.941 EAL: request: mp_malloc_sync 00:05:08.941 EAL: No shared files mode enabled, IPC is disabled 00:05:08.941 EAL: Heap on socket 0 was shrunk by 4MB 00:05:08.941 EAL: Trying to obtain current memory policy. 00:05:08.941 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.941 EAL: Restoring previous memory policy: 4 00:05:08.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.941 EAL: request: mp_malloc_sync 00:05:08.941 EAL: No shared files mode enabled, IPC is disabled 00:05:08.941 EAL: Heap on socket 0 was expanded by 6MB 00:05:08.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.941 EAL: request: mp_malloc_sync 00:05:08.941 EAL: No shared files mode enabled, IPC is disabled 00:05:08.941 EAL: Heap on socket 0 was shrunk by 6MB 00:05:08.941 EAL: Trying to obtain current memory policy. 00:05:08.941 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.941 EAL: Restoring previous memory policy: 4 00:05:08.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.941 EAL: request: mp_malloc_sync 00:05:08.941 EAL: No shared files mode enabled, IPC is disabled 00:05:08.941 EAL: Heap on socket 0 was expanded by 10MB 00:05:08.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.941 EAL: request: mp_malloc_sync 00:05:08.941 EAL: No shared files mode enabled, IPC is disabled 00:05:08.941 EAL: Heap on socket 0 was shrunk by 10MB 00:05:08.941 EAL: Trying to obtain current memory policy. 
00:05:08.941 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.941 EAL: Restoring previous memory policy: 4 00:05:08.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.941 EAL: request: mp_malloc_sync 00:05:08.941 EAL: No shared files mode enabled, IPC is disabled 00:05:08.941 EAL: Heap on socket 0 was expanded by 18MB 00:05:08.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.941 EAL: request: mp_malloc_sync 00:05:08.941 EAL: No shared files mode enabled, IPC is disabled 00:05:08.941 EAL: Heap on socket 0 was shrunk by 18MB 00:05:08.941 EAL: Trying to obtain current memory policy. 00:05:08.941 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.941 EAL: Restoring previous memory policy: 4 00:05:08.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.941 EAL: request: mp_malloc_sync 00:05:08.941 EAL: No shared files mode enabled, IPC is disabled 00:05:08.941 EAL: Heap on socket 0 was expanded by 34MB 00:05:08.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.941 EAL: request: mp_malloc_sync 00:05:08.941 EAL: No shared files mode enabled, IPC is disabled 00:05:08.941 EAL: Heap on socket 0 was shrunk by 34MB 00:05:08.941 EAL: Trying to obtain current memory policy. 00:05:08.941 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.941 EAL: Restoring previous memory policy: 4 00:05:08.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.941 EAL: request: mp_malloc_sync 00:05:08.941 EAL: No shared files mode enabled, IPC is disabled 00:05:08.941 EAL: Heap on socket 0 was expanded by 66MB 00:05:08.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.941 EAL: request: mp_malloc_sync 00:05:08.941 EAL: No shared files mode enabled, IPC is disabled 00:05:08.941 EAL: Heap on socket 0 was shrunk by 66MB 00:05:08.941 EAL: Trying to obtain current memory policy. 00:05:08.941 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.941 EAL: Restoring previous memory policy: 4 00:05:08.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.941 EAL: request: mp_malloc_sync 00:05:08.941 EAL: No shared files mode enabled, IPC is disabled 00:05:08.941 EAL: Heap on socket 0 was expanded by 130MB 00:05:08.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.941 EAL: request: mp_malloc_sync 00:05:08.941 EAL: No shared files mode enabled, IPC is disabled 00:05:08.941 EAL: Heap on socket 0 was shrunk by 130MB 00:05:08.941 EAL: Trying to obtain current memory policy. 00:05:08.941 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.941 EAL: Restoring previous memory policy: 4 00:05:08.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.941 EAL: request: mp_malloc_sync 00:05:08.941 EAL: No shared files mode enabled, IPC is disabled 00:05:08.941 EAL: Heap on socket 0 was expanded by 258MB 00:05:08.941 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.941 EAL: request: mp_malloc_sync 00:05:08.941 EAL: No shared files mode enabled, IPC is disabled 00:05:08.941 EAL: Heap on socket 0 was shrunk by 258MB 00:05:08.941 EAL: Trying to obtain current memory policy. 
00:05:08.941 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.200 EAL: Restoring previous memory policy: 4 00:05:09.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.200 EAL: request: mp_malloc_sync 00:05:09.200 EAL: No shared files mode enabled, IPC is disabled 00:05:09.200 EAL: Heap on socket 0 was expanded by 514MB 00:05:09.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.200 EAL: request: mp_malloc_sync 00:05:09.200 EAL: No shared files mode enabled, IPC is disabled 00:05:09.200 EAL: Heap on socket 0 was shrunk by 514MB 00:05:09.200 EAL: Trying to obtain current memory policy. 00:05:09.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.459 EAL: Restoring previous memory policy: 4 00:05:09.459 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.459 EAL: request: mp_malloc_sync 00:05:09.459 EAL: No shared files mode enabled, IPC is disabled 00:05:09.459 EAL: Heap on socket 0 was expanded by 1026MB 00:05:09.459 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.459 EAL: request: mp_malloc_sync 00:05:09.459 EAL: No shared files mode enabled, IPC is disabled 00:05:09.459 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:09.459 passed 00:05:09.459 00:05:09.459 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.459 suites 1 1 n/a 0 0 00:05:09.459 tests 2 2 2 0 0 00:05:09.459 asserts 497 497 497 0 n/a 00:05:09.459 00:05:09.459 Elapsed time = 0.661 seconds 00:05:09.459 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.459 EAL: request: mp_malloc_sync 00:05:09.459 EAL: No shared files mode enabled, IPC is disabled 00:05:09.459 EAL: Heap on socket 0 was shrunk by 2MB 00:05:09.459 EAL: No shared files mode enabled, IPC is disabled 00:05:09.459 EAL: No shared files mode enabled, IPC is disabled 00:05:09.459 EAL: No shared files mode enabled, IPC is disabled 00:05:09.459 00:05:09.459 real 0m0.787s 00:05:09.459 user 0m0.413s 00:05:09.459 sys 0m0.348s 00:05:09.459 23:41:24 env.env_vtophys -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:09.719 23:41:24 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:09.719 ************************************ 00:05:09.719 END TEST env_vtophys 00:05:09.719 ************************************ 00:05:09.719 23:41:24 env -- common/autotest_common.sh@1136 -- # return 0 00:05:09.719 23:41:24 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:09.719 23:41:24 env -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:09.719 23:41:24 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:09.719 23:41:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.719 ************************************ 00:05:09.719 START TEST env_pci 00:05:09.719 ************************************ 00:05:09.719 23:41:24 env.env_pci -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:09.719 00:05:09.719 00:05:09.719 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.719 http://cunit.sourceforge.net/ 00:05:09.719 00:05:09.719 00:05:09.719 Suite: pci 00:05:09.719 Test: pci_hook ...[2024-07-15 23:41:24.739393] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 228394 has claimed it 00:05:09.719 EAL: Cannot find device (10000:00:01.0) 00:05:09.719 EAL: Failed to attach device on primary process 00:05:09.719 passed 00:05:09.719 
00:05:09.719 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.719 suites 1 1 n/a 0 0 00:05:09.719 tests 1 1 1 0 0 00:05:09.719 asserts 25 25 25 0 n/a 00:05:09.719 00:05:09.719 Elapsed time = 0.033 seconds 00:05:09.719 00:05:09.719 real 0m0.053s 00:05:09.719 user 0m0.020s 00:05:09.719 sys 0m0.033s 00:05:09.719 23:41:24 env.env_pci -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:09.719 23:41:24 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:09.719 ************************************ 00:05:09.719 END TEST env_pci 00:05:09.719 ************************************ 00:05:09.719 23:41:24 env -- common/autotest_common.sh@1136 -- # return 0 00:05:09.719 23:41:24 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:09.719 23:41:24 env -- env/env.sh@15 -- # uname 00:05:09.719 23:41:24 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:09.719 23:41:24 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:09.719 23:41:24 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:09.719 23:41:24 env -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:05:09.719 23:41:24 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:09.719 23:41:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.719 ************************************ 00:05:09.719 START TEST env_dpdk_post_init 00:05:09.719 ************************************ 00:05:09.719 23:41:24 env.env_dpdk_post_init -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:09.719 EAL: Detected CPU lcores: 128 00:05:09.719 EAL: Detected NUMA nodes: 2 00:05:09.719 EAL: Detected shared linkage of DPDK 00:05:09.719 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:09.980 EAL: Selected IOVA mode 'VA' 00:05:09.980 EAL: VFIO support initialized 00:05:09.980 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:09.980 EAL: Using IOMMU type 1 (Type 1) 00:05:09.980 EAL: Ignore mapping IO port bar(1) 00:05:10.290 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:10.290 EAL: Ignore mapping IO port bar(1) 00:05:10.550 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:10.550 EAL: Ignore mapping IO port bar(1) 00:05:10.550 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:10.812 EAL: Ignore mapping IO port bar(1) 00:05:10.812 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:10.812 EAL: Ignore mapping IO port bar(1) 00:05:11.072 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:11.072 EAL: Ignore mapping IO port bar(1) 00:05:11.333 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:11.333 EAL: Ignore mapping IO port bar(1) 00:05:11.592 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:11.592 EAL: Ignore mapping IO port bar(1) 00:05:11.592 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:11.853 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:12.113 EAL: Ignore mapping IO port bar(1) 00:05:12.113 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:12.373 EAL: Ignore mapping IO port bar(1) 00:05:12.373 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:12.633 EAL: Ignore mapping IO port bar(1) 00:05:12.633 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:12.633 EAL: Ignore mapping IO port bar(1) 00:05:12.893 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:12.893 EAL: Ignore mapping IO port bar(1) 00:05:13.153 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:13.153 EAL: Ignore mapping IO port bar(1) 00:05:13.153 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:13.414 EAL: Ignore mapping IO port bar(1) 00:05:13.414 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:13.674 EAL: Ignore mapping IO port bar(1) 00:05:13.674 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:13.674 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:13.674 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:13.935 Starting DPDK initialization... 00:05:13.935 Starting SPDK post initialization... 00:05:13.935 SPDK NVMe probe 00:05:13.935 Attaching to 0000:65:00.0 00:05:13.935 Attached to 0000:65:00.0 00:05:13.935 Cleaning up... 00:05:15.870 00:05:15.870 real 0m5.729s 00:05:15.870 user 0m0.190s 00:05:15.870 sys 0m0.078s 00:05:15.870 23:41:30 env.env_dpdk_post_init -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:15.870 23:41:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:15.870 ************************************ 00:05:15.870 END TEST env_dpdk_post_init 00:05:15.870 ************************************ 00:05:15.870 23:41:30 env -- common/autotest_common.sh@1136 -- # return 0 00:05:15.870 23:41:30 env -- env/env.sh@26 -- # uname 00:05:15.870 23:41:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:15.870 23:41:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:15.870 23:41:30 env -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:15.870 23:41:30 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:15.870 23:41:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.870 ************************************ 00:05:15.870 START TEST env_mem_callbacks 00:05:15.870 ************************************ 00:05:15.870 23:41:30 env.env_mem_callbacks -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:15.870 EAL: Detected CPU lcores: 128 00:05:15.870 EAL: Detected NUMA nodes: 2 00:05:15.870 EAL: Detected shared linkage of DPDK 00:05:15.870 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:15.870 EAL: Selected IOVA mode 'VA' 00:05:15.870 EAL: VFIO support initialized 00:05:15.870 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:15.870 00:05:15.870 00:05:15.870 CUnit - A unit testing framework for C - Version 2.1-3 00:05:15.870 http://cunit.sourceforge.net/ 00:05:15.870 00:05:15.870 00:05:15.870 Suite: memory 00:05:15.870 Test: test ... 
00:05:15.870 register 0x200000200000 2097152 00:05:15.870 malloc 3145728 00:05:15.870 register 0x200000400000 4194304 00:05:15.870 buf 0x200000500000 len 3145728 PASSED 00:05:15.870 malloc 64 00:05:15.870 buf 0x2000004fff40 len 64 PASSED 00:05:15.870 malloc 4194304 00:05:15.870 register 0x200000800000 6291456 00:05:15.870 buf 0x200000a00000 len 4194304 PASSED 00:05:15.870 free 0x200000500000 3145728 00:05:15.870 free 0x2000004fff40 64 00:05:15.870 unregister 0x200000400000 4194304 PASSED 00:05:15.870 free 0x200000a00000 4194304 00:05:15.870 unregister 0x200000800000 6291456 PASSED 00:05:15.870 malloc 8388608 00:05:15.870 register 0x200000400000 10485760 00:05:15.870 buf 0x200000600000 len 8388608 PASSED 00:05:15.870 free 0x200000600000 8388608 00:05:15.870 unregister 0x200000400000 10485760 PASSED 00:05:15.870 passed 00:05:15.870 00:05:15.870 Run Summary: Type Total Ran Passed Failed Inactive 00:05:15.870 suites 1 1 n/a 0 0 00:05:15.870 tests 1 1 1 0 0 00:05:15.870 asserts 15 15 15 0 n/a 00:05:15.870 00:05:15.870 Elapsed time = 0.007 seconds 00:05:15.870 00:05:15.870 real 0m0.065s 00:05:15.870 user 0m0.025s 00:05:15.870 sys 0m0.040s 00:05:15.870 23:41:30 env.env_mem_callbacks -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:15.870 23:41:30 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:15.870 ************************************ 00:05:15.870 END TEST env_mem_callbacks 00:05:15.870 ************************************ 00:05:15.870 23:41:30 env -- common/autotest_common.sh@1136 -- # return 0 00:05:15.870 00:05:15.870 real 0m7.338s 00:05:15.870 user 0m1.027s 00:05:15.870 sys 0m0.852s 00:05:15.870 23:41:30 env -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:15.870 23:41:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.870 ************************************ 00:05:15.870 END TEST env 00:05:15.870 ************************************ 00:05:15.870 23:41:30 -- common/autotest_common.sh@1136 -- # return 0 00:05:15.870 23:41:30 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:15.870 23:41:30 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:15.870 23:41:30 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:15.870 23:41:30 -- common/autotest_common.sh@10 -- # set +x 00:05:15.870 ************************************ 00:05:15.870 START TEST rpc 00:05:15.870 ************************************ 00:05:15.870 23:41:30 rpc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:15.870 * Looking for test storage... 00:05:15.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:15.870 23:41:30 rpc -- rpc/rpc.sh@65 -- # spdk_pid=229836 00:05:15.870 23:41:30 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.870 23:41:30 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:15.870 23:41:30 rpc -- rpc/rpc.sh@67 -- # waitforlisten 229836 00:05:15.870 23:41:30 rpc -- common/autotest_common.sh@823 -- # '[' -z 229836 ']' 00:05:15.870 23:41:30 rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.870 23:41:30 rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:15.870 23:41:30 rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:15.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.870 23:41:30 rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:15.870 23:41:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.870 [2024-07-15 23:41:31.020334] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:15.870 [2024-07-15 23:41:31.020391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid229836 ] 00:05:16.130 [2024-07-15 23:41:31.094348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.130 [2024-07-15 23:41:31.169029] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:16.130 [2024-07-15 23:41:31.169070] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 229836' to capture a snapshot of events at runtime. 00:05:16.130 [2024-07-15 23:41:31.169081] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:16.130 [2024-07-15 23:41:31.169088] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:16.130 [2024-07-15 23:41:31.169094] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid229836 for offline analysis/debug. 00:05:16.130 [2024-07-15 23:41:31.169116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.828 23:41:31 rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:16.828 23:41:31 rpc -- common/autotest_common.sh@856 -- # return 0 00:05:16.828 23:41:31 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:16.828 23:41:31 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:16.828 23:41:31 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:16.828 23:41:31 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:16.828 23:41:31 rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:16.828 23:41:31 rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:16.828 23:41:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.828 ************************************ 00:05:16.828 START TEST rpc_integrity 00:05:16.828 ************************************ 00:05:16.828 23:41:31 rpc.rpc_integrity -- common/autotest_common.sh@1117 -- # rpc_integrity 00:05:16.828 23:41:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:16.828 23:41:31 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:16.828 23:41:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.828 23:41:31 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:16.828 23:41:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:16.828 23:41:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- 
# jq length 00:05:16.828 23:41:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:16.828 23:41:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:16.828 23:41:31 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:16.828 23:41:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.828 23:41:31 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:16.828 23:41:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:16.828 23:41:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:16.828 23:41:31 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:16.828 23:41:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.828 23:41:31 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:16.828 23:41:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:16.828 { 00:05:16.828 "name": "Malloc0", 00:05:16.828 "aliases": [ 00:05:16.828 "641fdc61-f4eb-4d1c-a477-3b305d7da13f" 00:05:16.828 ], 00:05:16.828 "product_name": "Malloc disk", 00:05:16.828 "block_size": 512, 00:05:16.828 "num_blocks": 16384, 00:05:16.828 "uuid": "641fdc61-f4eb-4d1c-a477-3b305d7da13f", 00:05:16.828 "assigned_rate_limits": { 00:05:16.828 "rw_ios_per_sec": 0, 00:05:16.828 "rw_mbytes_per_sec": 0, 00:05:16.828 "r_mbytes_per_sec": 0, 00:05:16.828 "w_mbytes_per_sec": 0 00:05:16.828 }, 00:05:16.828 "claimed": false, 00:05:16.828 "zoned": false, 00:05:16.828 "supported_io_types": { 00:05:16.828 "read": true, 00:05:16.828 "write": true, 00:05:16.828 "unmap": true, 00:05:16.828 "flush": true, 00:05:16.828 "reset": true, 00:05:16.828 "nvme_admin": false, 00:05:16.828 "nvme_io": false, 00:05:16.828 "nvme_io_md": false, 00:05:16.828 "write_zeroes": true, 00:05:16.828 "zcopy": true, 00:05:16.828 "get_zone_info": false, 00:05:16.828 "zone_management": false, 00:05:16.828 "zone_append": false, 00:05:16.828 "compare": false, 00:05:16.828 "compare_and_write": false, 00:05:16.828 "abort": true, 00:05:16.828 "seek_hole": false, 00:05:16.828 "seek_data": false, 00:05:16.828 "copy": true, 00:05:16.828 "nvme_iov_md": false 00:05:16.828 }, 00:05:16.828 "memory_domains": [ 00:05:16.828 { 00:05:16.828 "dma_device_id": "system", 00:05:16.828 "dma_device_type": 1 00:05:16.828 }, 00:05:16.828 { 00:05:16.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.828 "dma_device_type": 2 00:05:16.828 } 00:05:16.828 ], 00:05:16.828 "driver_specific": {} 00:05:16.828 } 00:05:16.828 ]' 00:05:16.828 23:41:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:16.828 23:41:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:16.828 23:41:31 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:16.828 23:41:31 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:16.828 23:41:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.828 [2024-07-15 23:41:31.967234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:16.828 [2024-07-15 23:41:31.967269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:16.828 [2024-07-15 23:41:31.967282] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xac4a10 00:05:16.828 [2024-07-15 23:41:31.967289] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:16.828 [2024-07-15 23:41:31.968627] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:16.828 [2024-07-15 23:41:31.968648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:16.828 Passthru0 00:05:16.828 23:41:31 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:16.828 23:41:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:16.828 23:41:31 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:16.828 23:41:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.828 23:41:31 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:16.828 23:41:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:16.828 { 00:05:16.828 "name": "Malloc0", 00:05:16.828 "aliases": [ 00:05:16.828 "641fdc61-f4eb-4d1c-a477-3b305d7da13f" 00:05:16.828 ], 00:05:16.828 "product_name": "Malloc disk", 00:05:16.828 "block_size": 512, 00:05:16.828 "num_blocks": 16384, 00:05:16.828 "uuid": "641fdc61-f4eb-4d1c-a477-3b305d7da13f", 00:05:16.828 "assigned_rate_limits": { 00:05:16.828 "rw_ios_per_sec": 0, 00:05:16.828 "rw_mbytes_per_sec": 0, 00:05:16.828 "r_mbytes_per_sec": 0, 00:05:16.828 "w_mbytes_per_sec": 0 00:05:16.828 }, 00:05:16.828 "claimed": true, 00:05:16.828 "claim_type": "exclusive_write", 00:05:16.828 "zoned": false, 00:05:16.828 "supported_io_types": { 00:05:16.828 "read": true, 00:05:16.828 "write": true, 00:05:16.828 "unmap": true, 00:05:16.828 "flush": true, 00:05:16.828 "reset": true, 00:05:16.828 "nvme_admin": false, 00:05:16.828 "nvme_io": false, 00:05:16.828 "nvme_io_md": false, 00:05:16.828 "write_zeroes": true, 00:05:16.828 "zcopy": true, 00:05:16.828 "get_zone_info": false, 00:05:16.828 "zone_management": false, 00:05:16.828 "zone_append": false, 00:05:16.828 "compare": false, 00:05:16.828 "compare_and_write": false, 00:05:16.828 "abort": true, 00:05:16.828 "seek_hole": false, 00:05:16.828 "seek_data": false, 00:05:16.828 "copy": true, 00:05:16.828 "nvme_iov_md": false 00:05:16.828 }, 00:05:16.828 "memory_domains": [ 00:05:16.828 { 00:05:16.828 "dma_device_id": "system", 00:05:16.828 "dma_device_type": 1 00:05:16.828 }, 00:05:16.828 { 00:05:16.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.828 "dma_device_type": 2 00:05:16.828 } 00:05:16.828 ], 00:05:16.828 "driver_specific": {} 00:05:16.828 }, 00:05:16.828 { 00:05:16.828 "name": "Passthru0", 00:05:16.828 "aliases": [ 00:05:16.828 "c3d547a9-ba2a-5e8b-91af-4f70288ddea1" 00:05:16.828 ], 00:05:16.828 "product_name": "passthru", 00:05:16.828 "block_size": 512, 00:05:16.828 "num_blocks": 16384, 00:05:16.828 "uuid": "c3d547a9-ba2a-5e8b-91af-4f70288ddea1", 00:05:16.828 "assigned_rate_limits": { 00:05:16.828 "rw_ios_per_sec": 0, 00:05:16.828 "rw_mbytes_per_sec": 0, 00:05:16.828 "r_mbytes_per_sec": 0, 00:05:16.828 "w_mbytes_per_sec": 0 00:05:16.828 }, 00:05:16.828 "claimed": false, 00:05:16.828 "zoned": false, 00:05:16.828 "supported_io_types": { 00:05:16.828 "read": true, 00:05:16.828 "write": true, 00:05:16.828 "unmap": true, 00:05:16.828 "flush": true, 00:05:16.828 "reset": true, 00:05:16.828 "nvme_admin": false, 00:05:16.828 "nvme_io": false, 00:05:16.828 "nvme_io_md": false, 00:05:16.828 "write_zeroes": true, 00:05:16.828 "zcopy": true, 00:05:16.828 "get_zone_info": false, 00:05:16.828 "zone_management": false, 00:05:16.828 "zone_append": false, 00:05:16.828 "compare": false, 00:05:16.828 "compare_and_write": false, 00:05:16.828 "abort": true, 00:05:16.828 "seek_hole": false, 00:05:16.828 "seek_data": false, 00:05:16.828 
"copy": true, 00:05:16.828 "nvme_iov_md": false 00:05:16.828 }, 00:05:16.828 "memory_domains": [ 00:05:16.828 { 00:05:16.828 "dma_device_id": "system", 00:05:16.828 "dma_device_type": 1 00:05:16.828 }, 00:05:16.828 { 00:05:16.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.828 "dma_device_type": 2 00:05:16.828 } 00:05:16.828 ], 00:05:16.828 "driver_specific": { 00:05:16.828 "passthru": { 00:05:16.828 "name": "Passthru0", 00:05:16.828 "base_bdev_name": "Malloc0" 00:05:16.828 } 00:05:16.828 } 00:05:16.828 } 00:05:16.828 ]' 00:05:16.828 23:41:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:17.110 23:41:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:17.110 23:41:32 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:17.110 23:41:32 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:17.110 23:41:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.110 23:41:32 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:17.110 23:41:32 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:17.110 23:41:32 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:17.110 23:41:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.110 23:41:32 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:17.110 23:41:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:17.111 23:41:32 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:17.111 23:41:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.111 23:41:32 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:17.111 23:41:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:17.111 23:41:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:17.111 23:41:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:17.111 00:05:17.111 real 0m0.292s 00:05:17.111 user 0m0.188s 00:05:17.111 sys 0m0.038s 00:05:17.111 23:41:32 rpc.rpc_integrity -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:17.111 23:41:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.111 ************************************ 00:05:17.111 END TEST rpc_integrity 00:05:17.111 ************************************ 00:05:17.111 23:41:32 rpc -- common/autotest_common.sh@1136 -- # return 0 00:05:17.111 23:41:32 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:17.111 23:41:32 rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:17.111 23:41:32 rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:17.111 23:41:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.111 ************************************ 00:05:17.111 START TEST rpc_plugins 00:05:17.111 ************************************ 00:05:17.111 23:41:32 rpc.rpc_plugins -- common/autotest_common.sh@1117 -- # rpc_plugins 00:05:17.111 23:41:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:17.111 23:41:32 rpc.rpc_plugins -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:17.111 23:41:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.111 23:41:32 rpc.rpc_plugins -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:17.111 23:41:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:17.111 23:41:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:17.111 23:41:32 
rpc.rpc_plugins -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:17.111 23:41:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.111 23:41:32 rpc.rpc_plugins -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:17.111 23:41:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:17.111 { 00:05:17.111 "name": "Malloc1", 00:05:17.111 "aliases": [ 00:05:17.111 "b831ebf4-b3ca-43b9-93ac-47726f90d2cd" 00:05:17.111 ], 00:05:17.111 "product_name": "Malloc disk", 00:05:17.111 "block_size": 4096, 00:05:17.111 "num_blocks": 256, 00:05:17.111 "uuid": "b831ebf4-b3ca-43b9-93ac-47726f90d2cd", 00:05:17.111 "assigned_rate_limits": { 00:05:17.111 "rw_ios_per_sec": 0, 00:05:17.111 "rw_mbytes_per_sec": 0, 00:05:17.111 "r_mbytes_per_sec": 0, 00:05:17.111 "w_mbytes_per_sec": 0 00:05:17.111 }, 00:05:17.111 "claimed": false, 00:05:17.111 "zoned": false, 00:05:17.111 "supported_io_types": { 00:05:17.111 "read": true, 00:05:17.111 "write": true, 00:05:17.111 "unmap": true, 00:05:17.111 "flush": true, 00:05:17.111 "reset": true, 00:05:17.111 "nvme_admin": false, 00:05:17.111 "nvme_io": false, 00:05:17.111 "nvme_io_md": false, 00:05:17.111 "write_zeroes": true, 00:05:17.111 "zcopy": true, 00:05:17.111 "get_zone_info": false, 00:05:17.111 "zone_management": false, 00:05:17.111 "zone_append": false, 00:05:17.111 "compare": false, 00:05:17.111 "compare_and_write": false, 00:05:17.111 "abort": true, 00:05:17.111 "seek_hole": false, 00:05:17.111 "seek_data": false, 00:05:17.111 "copy": true, 00:05:17.111 "nvme_iov_md": false 00:05:17.111 }, 00:05:17.111 "memory_domains": [ 00:05:17.111 { 00:05:17.111 "dma_device_id": "system", 00:05:17.111 "dma_device_type": 1 00:05:17.111 }, 00:05:17.111 { 00:05:17.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.111 "dma_device_type": 2 00:05:17.111 } 00:05:17.111 ], 00:05:17.111 "driver_specific": {} 00:05:17.111 } 00:05:17.111 ]' 00:05:17.111 23:41:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:17.111 23:41:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:17.111 23:41:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:17.111 23:41:32 rpc.rpc_plugins -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:17.111 23:41:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.111 23:41:32 rpc.rpc_plugins -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:17.111 23:41:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:17.111 23:41:32 rpc.rpc_plugins -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:17.111 23:41:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.111 23:41:32 rpc.rpc_plugins -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:17.111 23:41:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:17.111 23:41:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:17.372 23:41:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:17.372 00:05:17.372 real 0m0.150s 00:05:17.372 user 0m0.093s 00:05:17.372 sys 0m0.022s 00:05:17.372 23:41:32 rpc.rpc_plugins -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:17.372 23:41:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.372 ************************************ 00:05:17.372 END TEST rpc_plugins 00:05:17.372 ************************************ 00:05:17.372 23:41:32 rpc -- common/autotest_common.sh@1136 -- # return 0 00:05:17.372 23:41:32 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:05:17.372 23:41:32 rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:17.372 23:41:32 rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:17.372 23:41:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.372 ************************************ 00:05:17.372 START TEST rpc_trace_cmd_test 00:05:17.372 ************************************ 00:05:17.372 23:41:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1117 -- # rpc_trace_cmd_test 00:05:17.372 23:41:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:17.372 23:41:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:17.372 23:41:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:17.372 23:41:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:17.372 23:41:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:17.372 23:41:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:17.372 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid229836", 00:05:17.372 "tpoint_group_mask": "0x8", 00:05:17.372 "iscsi_conn": { 00:05:17.372 "mask": "0x2", 00:05:17.372 "tpoint_mask": "0x0" 00:05:17.372 }, 00:05:17.372 "scsi": { 00:05:17.372 "mask": "0x4", 00:05:17.372 "tpoint_mask": "0x0" 00:05:17.372 }, 00:05:17.372 "bdev": { 00:05:17.372 "mask": "0x8", 00:05:17.372 "tpoint_mask": "0xffffffffffffffff" 00:05:17.372 }, 00:05:17.372 "nvmf_rdma": { 00:05:17.372 "mask": "0x10", 00:05:17.372 "tpoint_mask": "0x0" 00:05:17.372 }, 00:05:17.372 "nvmf_tcp": { 00:05:17.372 "mask": "0x20", 00:05:17.372 "tpoint_mask": "0x0" 00:05:17.372 }, 00:05:17.372 "ftl": { 00:05:17.372 "mask": "0x40", 00:05:17.372 "tpoint_mask": "0x0" 00:05:17.372 }, 00:05:17.372 "blobfs": { 00:05:17.372 "mask": "0x80", 00:05:17.372 "tpoint_mask": "0x0" 00:05:17.372 }, 00:05:17.372 "dsa": { 00:05:17.372 "mask": "0x200", 00:05:17.372 "tpoint_mask": "0x0" 00:05:17.372 }, 00:05:17.372 "thread": { 00:05:17.372 "mask": "0x400", 00:05:17.372 "tpoint_mask": "0x0" 00:05:17.372 }, 00:05:17.372 "nvme_pcie": { 00:05:17.372 "mask": "0x800", 00:05:17.372 "tpoint_mask": "0x0" 00:05:17.372 }, 00:05:17.372 "iaa": { 00:05:17.372 "mask": "0x1000", 00:05:17.372 "tpoint_mask": "0x0" 00:05:17.372 }, 00:05:17.373 "nvme_tcp": { 00:05:17.373 "mask": "0x2000", 00:05:17.373 "tpoint_mask": "0x0" 00:05:17.373 }, 00:05:17.373 "bdev_nvme": { 00:05:17.373 "mask": "0x4000", 00:05:17.373 "tpoint_mask": "0x0" 00:05:17.373 }, 00:05:17.373 "sock": { 00:05:17.373 "mask": "0x8000", 00:05:17.373 "tpoint_mask": "0x0" 00:05:17.373 } 00:05:17.373 }' 00:05:17.373 23:41:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:17.373 23:41:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:17.373 23:41:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:17.373 23:41:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:17.373 23:41:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:17.633 23:41:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:17.633 23:41:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:17.633 23:41:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:17.633 23:41:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:17.633 23:41:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:17.633 00:05:17.633 real 
0m0.247s 00:05:17.633 user 0m0.206s 00:05:17.633 sys 0m0.033s 00:05:17.633 23:41:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:17.633 23:41:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:17.633 ************************************ 00:05:17.633 END TEST rpc_trace_cmd_test 00:05:17.633 ************************************ 00:05:17.633 23:41:32 rpc -- common/autotest_common.sh@1136 -- # return 0 00:05:17.633 23:41:32 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:17.633 23:41:32 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:17.633 23:41:32 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:17.633 23:41:32 rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:17.633 23:41:32 rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:17.633 23:41:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.634 ************************************ 00:05:17.634 START TEST rpc_daemon_integrity 00:05:17.634 ************************************ 00:05:17.634 23:41:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1117 -- # rpc_integrity 00:05:17.634 23:41:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:17.634 23:41:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:17.634 23:41:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.634 23:41:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:17.634 23:41:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:17.634 23:41:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:17.634 23:41:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:17.634 23:41:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:17.634 23:41:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:17.634 23:41:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.634 23:41:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:17.634 23:41:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:17.634 23:41:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:17.634 23:41:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:17.634 23:41:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.896 23:41:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:17.896 23:41:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:17.896 { 00:05:17.896 "name": "Malloc2", 00:05:17.896 "aliases": [ 00:05:17.896 "d5d1d9db-55b4-4aa0-af86-8db4ad05f062" 00:05:17.896 ], 00:05:17.896 "product_name": "Malloc disk", 00:05:17.896 "block_size": 512, 00:05:17.896 "num_blocks": 16384, 00:05:17.896 "uuid": "d5d1d9db-55b4-4aa0-af86-8db4ad05f062", 00:05:17.896 "assigned_rate_limits": { 00:05:17.896 "rw_ios_per_sec": 0, 00:05:17.896 "rw_mbytes_per_sec": 0, 00:05:17.896 "r_mbytes_per_sec": 0, 00:05:17.896 "w_mbytes_per_sec": 0 00:05:17.896 }, 00:05:17.896 "claimed": false, 00:05:17.896 "zoned": false, 00:05:17.896 "supported_io_types": { 00:05:17.896 "read": true, 00:05:17.896 "write": true, 00:05:17.896 "unmap": true, 00:05:17.896 "flush": true, 00:05:17.896 "reset": true, 00:05:17.896 "nvme_admin": false, 00:05:17.896 "nvme_io": false, 00:05:17.896 "nvme_io_md": 
false, 00:05:17.896 "write_zeroes": true, 00:05:17.896 "zcopy": true, 00:05:17.896 "get_zone_info": false, 00:05:17.896 "zone_management": false, 00:05:17.896 "zone_append": false, 00:05:17.896 "compare": false, 00:05:17.896 "compare_and_write": false, 00:05:17.896 "abort": true, 00:05:17.896 "seek_hole": false, 00:05:17.896 "seek_data": false, 00:05:17.896 "copy": true, 00:05:17.896 "nvme_iov_md": false 00:05:17.896 }, 00:05:17.896 "memory_domains": [ 00:05:17.896 { 00:05:17.896 "dma_device_id": "system", 00:05:17.896 "dma_device_type": 1 00:05:17.896 }, 00:05:17.896 { 00:05:17.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.896 "dma_device_type": 2 00:05:17.896 } 00:05:17.896 ], 00:05:17.896 "driver_specific": {} 00:05:17.896 } 00:05:17.896 ]' 00:05:17.896 23:41:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:17.896 23:41:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:17.896 23:41:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:17.896 23:41:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:17.896 23:41:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.896 [2024-07-15 23:41:32.881763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:17.896 [2024-07-15 23:41:32.881795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:17.896 [2024-07-15 23:41:32.881810] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc5bfe0 00:05:17.896 [2024-07-15 23:41:32.881817] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:17.896 [2024-07-15 23:41:32.883035] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:17.896 [2024-07-15 23:41:32.883056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:17.896 Passthru0 00:05:17.896 23:41:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:17.896 23:41:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:17.896 23:41:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:17.896 23:41:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.896 23:41:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:17.896 23:41:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:17.896 { 00:05:17.896 "name": "Malloc2", 00:05:17.896 "aliases": [ 00:05:17.896 "d5d1d9db-55b4-4aa0-af86-8db4ad05f062" 00:05:17.896 ], 00:05:17.896 "product_name": "Malloc disk", 00:05:17.896 "block_size": 512, 00:05:17.896 "num_blocks": 16384, 00:05:17.896 "uuid": "d5d1d9db-55b4-4aa0-af86-8db4ad05f062", 00:05:17.896 "assigned_rate_limits": { 00:05:17.896 "rw_ios_per_sec": 0, 00:05:17.896 "rw_mbytes_per_sec": 0, 00:05:17.896 "r_mbytes_per_sec": 0, 00:05:17.896 "w_mbytes_per_sec": 0 00:05:17.896 }, 00:05:17.896 "claimed": true, 00:05:17.896 "claim_type": "exclusive_write", 00:05:17.896 "zoned": false, 00:05:17.896 "supported_io_types": { 00:05:17.896 "read": true, 00:05:17.896 "write": true, 00:05:17.896 "unmap": true, 00:05:17.896 "flush": true, 00:05:17.896 "reset": true, 00:05:17.896 "nvme_admin": false, 00:05:17.896 "nvme_io": false, 00:05:17.896 "nvme_io_md": false, 00:05:17.896 "write_zeroes": true, 00:05:17.896 "zcopy": true, 00:05:17.896 "get_zone_info": false, 00:05:17.896 
"zone_management": false, 00:05:17.896 "zone_append": false, 00:05:17.896 "compare": false, 00:05:17.896 "compare_and_write": false, 00:05:17.896 "abort": true, 00:05:17.896 "seek_hole": false, 00:05:17.896 "seek_data": false, 00:05:17.896 "copy": true, 00:05:17.896 "nvme_iov_md": false 00:05:17.896 }, 00:05:17.896 "memory_domains": [ 00:05:17.896 { 00:05:17.896 "dma_device_id": "system", 00:05:17.896 "dma_device_type": 1 00:05:17.896 }, 00:05:17.896 { 00:05:17.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.896 "dma_device_type": 2 00:05:17.896 } 00:05:17.896 ], 00:05:17.896 "driver_specific": {} 00:05:17.896 }, 00:05:17.896 { 00:05:17.896 "name": "Passthru0", 00:05:17.896 "aliases": [ 00:05:17.896 "3fd68765-93d0-52d7-968f-e814eb0d2297" 00:05:17.896 ], 00:05:17.896 "product_name": "passthru", 00:05:17.896 "block_size": 512, 00:05:17.896 "num_blocks": 16384, 00:05:17.896 "uuid": "3fd68765-93d0-52d7-968f-e814eb0d2297", 00:05:17.896 "assigned_rate_limits": { 00:05:17.896 "rw_ios_per_sec": 0, 00:05:17.896 "rw_mbytes_per_sec": 0, 00:05:17.896 "r_mbytes_per_sec": 0, 00:05:17.896 "w_mbytes_per_sec": 0 00:05:17.896 }, 00:05:17.896 "claimed": false, 00:05:17.896 "zoned": false, 00:05:17.896 "supported_io_types": { 00:05:17.896 "read": true, 00:05:17.896 "write": true, 00:05:17.896 "unmap": true, 00:05:17.896 "flush": true, 00:05:17.896 "reset": true, 00:05:17.896 "nvme_admin": false, 00:05:17.896 "nvme_io": false, 00:05:17.896 "nvme_io_md": false, 00:05:17.896 "write_zeroes": true, 00:05:17.896 "zcopy": true, 00:05:17.896 "get_zone_info": false, 00:05:17.896 "zone_management": false, 00:05:17.896 "zone_append": false, 00:05:17.896 "compare": false, 00:05:17.896 "compare_and_write": false, 00:05:17.896 "abort": true, 00:05:17.896 "seek_hole": false, 00:05:17.896 "seek_data": false, 00:05:17.896 "copy": true, 00:05:17.896 "nvme_iov_md": false 00:05:17.896 }, 00:05:17.896 "memory_domains": [ 00:05:17.896 { 00:05:17.896 "dma_device_id": "system", 00:05:17.896 "dma_device_type": 1 00:05:17.896 }, 00:05:17.896 { 00:05:17.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.896 "dma_device_type": 2 00:05:17.897 } 00:05:17.897 ], 00:05:17.897 "driver_specific": { 00:05:17.897 "passthru": { 00:05:17.897 "name": "Passthru0", 00:05:17.897 "base_bdev_name": "Malloc2" 00:05:17.897 } 00:05:17.897 } 00:05:17.897 } 00:05:17.897 ]' 00:05:17.897 23:41:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:17.897 23:41:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:17.897 23:41:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:17.897 23:41:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:17.897 23:41:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.897 23:41:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:17.897 23:41:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:17.897 23:41:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:17.897 23:41:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.897 23:41:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:17.897 23:41:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:17.897 23:41:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:17.897 23:41:32 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:17.897 23:41:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:17.897 23:41:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:17.897 23:41:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:17.897 23:41:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:17.897 00:05:17.897 real 0m0.299s 00:05:17.897 user 0m0.193s 00:05:17.897 sys 0m0.037s 00:05:17.897 23:41:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:17.897 23:41:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.897 ************************************ 00:05:17.897 END TEST rpc_daemon_integrity 00:05:17.897 ************************************ 00:05:17.897 23:41:33 rpc -- common/autotest_common.sh@1136 -- # return 0 00:05:17.897 23:41:33 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:17.897 23:41:33 rpc -- rpc/rpc.sh@84 -- # killprocess 229836 00:05:17.897 23:41:33 rpc -- common/autotest_common.sh@942 -- # '[' -z 229836 ']' 00:05:17.897 23:41:33 rpc -- common/autotest_common.sh@946 -- # kill -0 229836 00:05:17.897 23:41:33 rpc -- common/autotest_common.sh@947 -- # uname 00:05:17.897 23:41:33 rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:17.897 23:41:33 rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 229836 00:05:18.158 23:41:33 rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:18.158 23:41:33 rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:18.158 23:41:33 rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 229836' 00:05:18.158 killing process with pid 229836 00:05:18.158 23:41:33 rpc -- common/autotest_common.sh@961 -- # kill 229836 00:05:18.158 23:41:33 rpc -- common/autotest_common.sh@966 -- # wait 229836 00:05:18.158 00:05:18.158 real 0m2.479s 00:05:18.158 user 0m3.270s 00:05:18.158 sys 0m0.697s 00:05:18.158 23:41:33 rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:18.158 23:41:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.158 ************************************ 00:05:18.158 END TEST rpc 00:05:18.158 ************************************ 00:05:18.418 23:41:33 -- common/autotest_common.sh@1136 -- # return 0 00:05:18.418 23:41:33 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:18.418 23:41:33 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:18.418 23:41:33 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:18.418 23:41:33 -- common/autotest_common.sh@10 -- # set +x 00:05:18.418 ************************************ 00:05:18.418 START TEST skip_rpc 00:05:18.418 ************************************ 00:05:18.418 23:41:33 skip_rpc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:18.418 * Looking for test storage... 
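
The rpc_integrity and rpc_daemon_integrity runs above exercise the same create/verify/delete cycle through the JSON-RPC interface. A minimal sketch of that cycle against a running spdk_tgt, assuming rpc.py from the SPDK scripts directory and the default /var/tmp/spdk.sock socket:

    # create a malloc bdev and layer a passthru bdev on top of it
    ./scripts/rpc.py bdev_malloc_create 8 512 -b Malloc2
    ./scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0

    # both bdevs should now be reported
    ./scripts/rpc.py bdev_get_bdevs | jq length    # expect 2

    # tear down in reverse order and confirm the list is empty again
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc2
    ./scripts/rpc.py bdev_get_bdevs | jq length    # expect 0
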
00:05:18.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:18.418 23:41:33 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:18.418 23:41:33 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:18.418 23:41:33 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:18.418 23:41:33 skip_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:18.418 23:41:33 skip_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:18.418 23:41:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.418 ************************************ 00:05:18.418 START TEST skip_rpc 00:05:18.418 ************************************ 00:05:18.418 23:41:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1117 -- # test_skip_rpc 00:05:18.418 23:41:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=230398 00:05:18.418 23:41:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.418 23:41:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:18.418 23:41:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:18.418 [2024-07-15 23:41:33.602557] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:18.418 [2024-07-15 23:41:33.602603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid230398 ] 00:05:18.679 [2024-07-15 23:41:33.668033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.679 [2024-07-15 23:41:33.733537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # local es=0 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@645 -- # rpc_cmd spdk_get_version 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@645 -- # es=1 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:23.969 
23:41:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 230398 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@942 -- # '[' -z 230398 ']' 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # kill -0 230398 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # uname 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 230398 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 230398' 00:05:23.969 killing process with pid 230398 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@961 -- # kill 230398 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # wait 230398 00:05:23.969 00:05:23.969 real 0m5.279s 00:05:23.969 user 0m5.083s 00:05:23.969 sys 0m0.230s 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:23.969 23:41:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.969 ************************************ 00:05:23.969 END TEST skip_rpc 00:05:23.969 ************************************ 00:05:23.969 23:41:38 skip_rpc -- common/autotest_common.sh@1136 -- # return 0 00:05:23.969 23:41:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:23.969 23:41:38 skip_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:23.969 23:41:38 skip_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:23.969 23:41:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.969 ************************************ 00:05:23.969 START TEST skip_rpc_with_json 00:05:23.969 ************************************ 00:05:23.969 23:41:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1117 -- # test_skip_rpc_with_json 00:05:23.969 23:41:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:23.969 23:41:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=231615 00:05:23.969 23:41:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.969 23:41:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 231615 00:05:23.969 23:41:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.969 23:41:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@823 -- # '[' -z 231615 ']' 00:05:23.969 23:41:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.969 23:41:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:23.969 23:41:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
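
The skip_rpc_with_json run starting here drives the round trip documented by the log below: create a TCP transport over RPC, snapshot the live configuration with save_config, then restart the target with --json and confirm the transport comes back without any RPC traffic. A minimal sketch of that flow, assuming rpc.py and spdk_tgt at the paths shown in the log and a shell variable holding the first target's pid:

    # with spdk_tgt running, create the transport and save the configuration
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > config.json
    kill "$spdk_pid"

    # replay the saved config at startup; no RPC server is needed for this check
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' log.txt && echo 'transport restored from config.json'
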
00:05:23.969 23:41:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:23.969 23:41:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:23.969 [2024-07-15 23:41:38.959998] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:23.969 [2024-07-15 23:41:38.960052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid231615 ] 00:05:23.969 [2024-07-15 23:41:39.031967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.969 [2024-07-15 23:41:39.104297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.910 23:41:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:24.910 23:41:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # return 0 00:05:24.910 23:41:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:24.910 23:41:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:24.910 23:41:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.910 [2024-07-15 23:41:39.741971] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:24.910 request: 00:05:24.910 { 00:05:24.910 "trtype": "tcp", 00:05:24.910 "method": "nvmf_get_transports", 00:05:24.910 "req_id": 1 00:05:24.910 } 00:05:24.910 Got JSON-RPC error response 00:05:24.910 response: 00:05:24.910 { 00:05:24.910 "code": -19, 00:05:24.910 "message": "No such device" 00:05:24.910 } 00:05:24.910 23:41:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:05:24.910 23:41:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:24.910 23:41:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:24.910 23:41:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.910 [2024-07-15 23:41:39.754090] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:24.910 23:41:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:24.910 23:41:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:24.910 23:41:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:24.910 23:41:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.910 23:41:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:24.910 23:41:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:24.910 { 00:05:24.910 "subsystems": [ 00:05:24.910 { 00:05:24.910 "subsystem": "vfio_user_target", 00:05:24.910 "config": null 00:05:24.910 }, 00:05:24.911 { 00:05:24.911 "subsystem": "keyring", 00:05:24.911 "config": [] 00:05:24.911 }, 00:05:24.911 { 00:05:24.911 "subsystem": "iobuf", 00:05:24.911 "config": [ 00:05:24.911 { 00:05:24.911 "method": "iobuf_set_options", 00:05:24.911 "params": { 00:05:24.911 "small_pool_count": 8192, 00:05:24.911 "large_pool_count": 1024, 00:05:24.911 "small_bufsize": 8192, 00:05:24.911 "large_bufsize": 135168 00:05:24.911 } 00:05:24.911 } 00:05:24.911 ] 00:05:24.911 }, 
00:05:24.911 { 00:05:24.911 "subsystem": "sock", 00:05:24.911 "config": [ 00:05:24.911 { 00:05:24.911 "method": "sock_set_default_impl", 00:05:24.911 "params": { 00:05:24.911 "impl_name": "posix" 00:05:24.911 } 00:05:24.911 }, 00:05:24.911 { 00:05:24.911 "method": "sock_impl_set_options", 00:05:24.911 "params": { 00:05:24.911 "impl_name": "ssl", 00:05:24.911 "recv_buf_size": 4096, 00:05:24.911 "send_buf_size": 4096, 00:05:24.911 "enable_recv_pipe": true, 00:05:24.911 "enable_quickack": false, 00:05:24.911 "enable_placement_id": 0, 00:05:24.911 "enable_zerocopy_send_server": true, 00:05:24.911 "enable_zerocopy_send_client": false, 00:05:24.911 "zerocopy_threshold": 0, 00:05:24.911 "tls_version": 0, 00:05:24.911 "enable_ktls": false 00:05:24.911 } 00:05:24.911 }, 00:05:24.911 { 00:05:24.911 "method": "sock_impl_set_options", 00:05:24.911 "params": { 00:05:24.911 "impl_name": "posix", 00:05:24.911 "recv_buf_size": 2097152, 00:05:24.911 "send_buf_size": 2097152, 00:05:24.911 "enable_recv_pipe": true, 00:05:24.911 "enable_quickack": false, 00:05:24.911 "enable_placement_id": 0, 00:05:24.911 "enable_zerocopy_send_server": true, 00:05:24.911 "enable_zerocopy_send_client": false, 00:05:24.911 "zerocopy_threshold": 0, 00:05:24.911 "tls_version": 0, 00:05:24.911 "enable_ktls": false 00:05:24.911 } 00:05:24.911 } 00:05:24.911 ] 00:05:24.911 }, 00:05:24.911 { 00:05:24.911 "subsystem": "vmd", 00:05:24.911 "config": [] 00:05:24.911 }, 00:05:24.911 { 00:05:24.911 "subsystem": "accel", 00:05:24.911 "config": [ 00:05:24.911 { 00:05:24.911 "method": "accel_set_options", 00:05:24.911 "params": { 00:05:24.911 "small_cache_size": 128, 00:05:24.911 "large_cache_size": 16, 00:05:24.911 "task_count": 2048, 00:05:24.911 "sequence_count": 2048, 00:05:24.911 "buf_count": 2048 00:05:24.911 } 00:05:24.911 } 00:05:24.911 ] 00:05:24.911 }, 00:05:24.911 { 00:05:24.911 "subsystem": "bdev", 00:05:24.911 "config": [ 00:05:24.911 { 00:05:24.911 "method": "bdev_set_options", 00:05:24.911 "params": { 00:05:24.911 "bdev_io_pool_size": 65535, 00:05:24.911 "bdev_io_cache_size": 256, 00:05:24.911 "bdev_auto_examine": true, 00:05:24.911 "iobuf_small_cache_size": 128, 00:05:24.911 "iobuf_large_cache_size": 16 00:05:24.911 } 00:05:24.911 }, 00:05:24.911 { 00:05:24.911 "method": "bdev_raid_set_options", 00:05:24.911 "params": { 00:05:24.911 "process_window_size_kb": 1024 00:05:24.911 } 00:05:24.911 }, 00:05:24.911 { 00:05:24.911 "method": "bdev_iscsi_set_options", 00:05:24.911 "params": { 00:05:24.911 "timeout_sec": 30 00:05:24.911 } 00:05:24.911 }, 00:05:24.911 { 00:05:24.911 "method": "bdev_nvme_set_options", 00:05:24.911 "params": { 00:05:24.911 "action_on_timeout": "none", 00:05:24.911 "timeout_us": 0, 00:05:24.911 "timeout_admin_us": 0, 00:05:24.911 "keep_alive_timeout_ms": 10000, 00:05:24.911 "arbitration_burst": 0, 00:05:24.911 "low_priority_weight": 0, 00:05:24.911 "medium_priority_weight": 0, 00:05:24.911 "high_priority_weight": 0, 00:05:24.911 "nvme_adminq_poll_period_us": 10000, 00:05:24.911 "nvme_ioq_poll_period_us": 0, 00:05:24.911 "io_queue_requests": 0, 00:05:24.911 "delay_cmd_submit": true, 00:05:24.911 "transport_retry_count": 4, 00:05:24.911 "bdev_retry_count": 3, 00:05:24.911 "transport_ack_timeout": 0, 00:05:24.911 "ctrlr_loss_timeout_sec": 0, 00:05:24.911 "reconnect_delay_sec": 0, 00:05:24.911 "fast_io_fail_timeout_sec": 0, 00:05:24.911 "disable_auto_failback": false, 00:05:24.911 "generate_uuids": false, 00:05:24.911 "transport_tos": 0, 00:05:24.911 "nvme_error_stat": false, 00:05:24.911 "rdma_srq_size": 0, 
00:05:24.911 "io_path_stat": false, 00:05:24.911 "allow_accel_sequence": false, 00:05:24.911 "rdma_max_cq_size": 0, 00:05:24.911 "rdma_cm_event_timeout_ms": 0, 00:05:24.911 "dhchap_digests": [ 00:05:24.911 "sha256", 00:05:24.911 "sha384", 00:05:24.911 "sha512" 00:05:24.911 ], 00:05:24.911 "dhchap_dhgroups": [ 00:05:24.911 "null", 00:05:24.911 "ffdhe2048", 00:05:24.911 "ffdhe3072", 00:05:24.911 "ffdhe4096", 00:05:24.911 "ffdhe6144", 00:05:24.911 "ffdhe8192" 00:05:24.911 ] 00:05:24.911 } 00:05:24.911 }, 00:05:24.911 { 00:05:24.911 "method": "bdev_nvme_set_hotplug", 00:05:24.911 "params": { 00:05:24.911 "period_us": 100000, 00:05:24.911 "enable": false 00:05:24.911 } 00:05:24.911 }, 00:05:24.911 { 00:05:24.911 "method": "bdev_wait_for_examine" 00:05:24.911 } 00:05:24.911 ] 00:05:24.911 }, 00:05:24.911 { 00:05:24.911 "subsystem": "scsi", 00:05:24.911 "config": null 00:05:24.911 }, 00:05:24.911 { 00:05:24.911 "subsystem": "scheduler", 00:05:24.911 "config": [ 00:05:24.911 { 00:05:24.911 "method": "framework_set_scheduler", 00:05:24.911 "params": { 00:05:24.911 "name": "static" 00:05:24.911 } 00:05:24.911 } 00:05:24.911 ] 00:05:24.911 }, 00:05:24.911 { 00:05:24.911 "subsystem": "vhost_scsi", 00:05:24.911 "config": [] 00:05:24.911 }, 00:05:24.911 { 00:05:24.911 "subsystem": "vhost_blk", 00:05:24.911 "config": [] 00:05:24.911 }, 00:05:24.911 { 00:05:24.911 "subsystem": "ublk", 00:05:24.911 "config": [] 00:05:24.911 }, 00:05:24.911 { 00:05:24.911 "subsystem": "nbd", 00:05:24.911 "config": [] 00:05:24.911 }, 00:05:24.911 { 00:05:24.911 "subsystem": "nvmf", 00:05:24.911 "config": [ 00:05:24.911 { 00:05:24.911 "method": "nvmf_set_config", 00:05:24.911 "params": { 00:05:24.911 "discovery_filter": "match_any", 00:05:24.911 "admin_cmd_passthru": { 00:05:24.911 "identify_ctrlr": false 00:05:24.911 } 00:05:24.911 } 00:05:24.911 }, 00:05:24.911 { 00:05:24.911 "method": "nvmf_set_max_subsystems", 00:05:24.911 "params": { 00:05:24.911 "max_subsystems": 1024 00:05:24.911 } 00:05:24.911 }, 00:05:24.911 { 00:05:24.911 "method": "nvmf_set_crdt", 00:05:24.911 "params": { 00:05:24.911 "crdt1": 0, 00:05:24.911 "crdt2": 0, 00:05:24.911 "crdt3": 0 00:05:24.911 } 00:05:24.911 }, 00:05:24.911 { 00:05:24.911 "method": "nvmf_create_transport", 00:05:24.911 "params": { 00:05:24.911 "trtype": "TCP", 00:05:24.911 "max_queue_depth": 128, 00:05:24.911 "max_io_qpairs_per_ctrlr": 127, 00:05:24.911 "in_capsule_data_size": 4096, 00:05:24.911 "max_io_size": 131072, 00:05:24.911 "io_unit_size": 131072, 00:05:24.911 "max_aq_depth": 128, 00:05:24.911 "num_shared_buffers": 511, 00:05:24.911 "buf_cache_size": 4294967295, 00:05:24.911 "dif_insert_or_strip": false, 00:05:24.911 "zcopy": false, 00:05:24.911 "c2h_success": true, 00:05:24.911 "sock_priority": 0, 00:05:24.911 "abort_timeout_sec": 1, 00:05:24.911 "ack_timeout": 0, 00:05:24.911 "data_wr_pool_size": 0 00:05:24.911 } 00:05:24.911 } 00:05:24.911 ] 00:05:24.911 }, 00:05:24.911 { 00:05:24.911 "subsystem": "iscsi", 00:05:24.911 "config": [ 00:05:24.911 { 00:05:24.911 "method": "iscsi_set_options", 00:05:24.911 "params": { 00:05:24.911 "node_base": "iqn.2016-06.io.spdk", 00:05:24.911 "max_sessions": 128, 00:05:24.911 "max_connections_per_session": 2, 00:05:24.911 "max_queue_depth": 64, 00:05:24.911 "default_time2wait": 2, 00:05:24.911 "default_time2retain": 20, 00:05:24.911 "first_burst_length": 8192, 00:05:24.911 "immediate_data": true, 00:05:24.911 "allow_duplicated_isid": false, 00:05:24.911 "error_recovery_level": 0, 00:05:24.911 "nop_timeout": 60, 00:05:24.911 "nop_in_interval": 
30, 00:05:24.911 "disable_chap": false, 00:05:24.911 "require_chap": false, 00:05:24.911 "mutual_chap": false, 00:05:24.911 "chap_group": 0, 00:05:24.911 "max_large_datain_per_connection": 64, 00:05:24.911 "max_r2t_per_connection": 4, 00:05:24.911 "pdu_pool_size": 36864, 00:05:24.911 "immediate_data_pool_size": 16384, 00:05:24.911 "data_out_pool_size": 2048 00:05:24.911 } 00:05:24.911 } 00:05:24.911 ] 00:05:24.911 } 00:05:24.911 ] 00:05:24.911 } 00:05:24.911 23:41:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:24.911 23:41:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 231615 00:05:24.911 23:41:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@942 -- # '[' -z 231615 ']' 00:05:24.911 23:41:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # kill -0 231615 00:05:24.911 23:41:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # uname 00:05:24.911 23:41:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:24.911 23:41:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 231615 00:05:24.911 23:41:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:24.911 23:41:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:24.911 23:41:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # echo 'killing process with pid 231615' 00:05:24.912 killing process with pid 231615 00:05:24.912 23:41:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@961 -- # kill 231615 00:05:24.912 23:41:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # wait 231615 00:05:25.172 23:41:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=231760 00:05:25.172 23:41:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:25.172 23:41:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 231760 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@942 -- # '[' -z 231760 ']' 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # kill -0 231760 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # uname 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 231760 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # echo 'killing process with pid 231760' 00:05:30.463 killing process with pid 231760 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@961 -- # kill 231760 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # wait 231760 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:30.463 00:05:30.463 real 0m6.556s 00:05:30.463 user 0m6.453s 00:05:30.463 sys 0m0.523s 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:30.463 ************************************ 00:05:30.463 END TEST skip_rpc_with_json 00:05:30.463 ************************************ 00:05:30.463 23:41:45 skip_rpc -- common/autotest_common.sh@1136 -- # return 0 00:05:30.463 23:41:45 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:30.463 23:41:45 skip_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:30.463 23:41:45 skip_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:30.463 23:41:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.463 ************************************ 00:05:30.463 START TEST skip_rpc_with_delay 00:05:30.463 ************************************ 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1117 -- # test_skip_rpc_with_delay 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # local es=0 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.463 [2024-07-15 23:41:45.595539] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
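
The skip_rpc_with_delay case being exercised here passes a deliberately contradictory pair of flags and only verifies that the target refuses to start; the error above is the expected outcome. A minimal reproduction of that negative check, assuming the same spdk_tgt binary:

    # --wait-for-rpc requires an RPC server, so this combination must fail fast
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'unexpected: target started' >&2
        exit 1
    fi
    echo 'target rejected the flag combination as expected'
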
00:05:30.463 [2024-07-15 23:41:45.595628] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@645 -- # es=1 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:05:30.463 00:05:30.463 real 0m0.076s 00:05:30.463 user 0m0.050s 00:05:30.463 sys 0m0.025s 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:30.463 23:41:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:30.463 ************************************ 00:05:30.463 END TEST skip_rpc_with_delay 00:05:30.463 ************************************ 00:05:30.463 23:41:45 skip_rpc -- common/autotest_common.sh@1136 -- # return 0 00:05:30.463 23:41:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:30.463 23:41:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:30.725 23:41:45 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:30.725 23:41:45 skip_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:30.725 23:41:45 skip_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:30.725 23:41:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.725 ************************************ 00:05:30.725 START TEST exit_on_failed_rpc_init 00:05:30.725 ************************************ 00:05:30.725 23:41:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1117 -- # test_exit_on_failed_rpc_init 00:05:30.725 23:41:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=233043 00:05:30.725 23:41:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 233043 00:05:30.725 23:41:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:30.725 23:41:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@823 -- # '[' -z 233043 ']' 00:05:30.725 23:41:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.725 23:41:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:30.725 23:41:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.725 23:41:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:30.725 23:41:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:30.725 [2024-07-15 23:41:45.750146] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:05:30.725 [2024-07-15 23:41:45.750209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid233043 ] 00:05:30.725 [2024-07-15 23:41:45.820754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.725 [2024-07-15 23:41:45.894985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.669 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:31.669 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # return 0 00:05:31.669 23:41:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.669 23:41:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:31.669 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # local es=0 00:05:31.669 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:31.669 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.669 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:31.669 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.669 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:31.669 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.669 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:31.669 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.669 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:31.669 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:31.669 [2024-07-15 23:41:46.553411] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:31.669 [2024-07-15 23:41:46.553463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid233140 ] 00:05:31.669 [2024-07-15 23:41:46.634508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.669 [2024-07-15 23:41:46.699262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.669 [2024-07-15 23:41:46.699323] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
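
The exit_on_failed_rpc_init case launches a second spdk_tgt against the same default RPC socket on purpose, and the error above is what makes that second instance exit non-zero. A sketch of the conflict and of the usual way around it, assuming the -r option is used to pick a different socket path (the /var/tmp/spdk2.sock name is only illustrative):

    # first instance owns the default /var/tmp/spdk.sock
    ./build/bin/spdk_tgt -m 0x1 &

    # a second instance on the same socket fails exactly as logged above;
    # pointing it at its own socket with -r avoids the conflict
    ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
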
00:05:31.670 [2024-07-15 23:41:46.699332] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:31.670 [2024-07-15 23:41:46.699338] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:31.670 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@645 -- # es=234 00:05:31.670 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:05:31.670 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # es=106 00:05:31.670 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # case "$es" in 00:05:31.670 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=1 00:05:31.670 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:05:31.670 23:41:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:31.670 23:41:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 233043 00:05:31.670 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@942 -- # '[' -z 233043 ']' 00:05:31.670 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # kill -0 233043 00:05:31.670 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # uname 00:05:31.670 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:31.670 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 233043 00:05:31.670 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:31.670 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:31.670 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # echo 'killing process with pid 233043' 00:05:31.670 killing process with pid 233043 00:05:31.670 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@961 -- # kill 233043 00:05:31.670 23:41:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # wait 233043 00:05:31.931 00:05:31.931 real 0m1.326s 00:05:31.931 user 0m1.535s 00:05:31.931 sys 0m0.377s 00:05:31.931 23:41:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:31.931 23:41:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:31.931 ************************************ 00:05:31.931 END TEST exit_on_failed_rpc_init 00:05:31.931 ************************************ 00:05:31.931 23:41:47 skip_rpc -- common/autotest_common.sh@1136 -- # return 0 00:05:31.931 23:41:47 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:31.931 00:05:31.931 real 0m13.644s 00:05:31.931 user 0m13.256s 00:05:31.931 sys 0m1.448s 00:05:31.931 23:41:47 skip_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:31.931 23:41:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.931 ************************************ 00:05:31.931 END TEST skip_rpc 00:05:31.931 ************************************ 00:05:31.931 23:41:47 -- common/autotest_common.sh@1136 -- # return 0 00:05:31.931 23:41:47 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:31.931 23:41:47 -- 
common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:31.931 23:41:47 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:31.931 23:41:47 -- common/autotest_common.sh@10 -- # set +x 00:05:32.193 ************************************ 00:05:32.193 START TEST rpc_client 00:05:32.193 ************************************ 00:05:32.193 23:41:47 rpc_client -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:32.193 * Looking for test storage... 00:05:32.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:32.193 23:41:47 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:32.193 OK 00:05:32.193 23:41:47 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:32.193 00:05:32.193 real 0m0.124s 00:05:32.193 user 0m0.058s 00:05:32.193 sys 0m0.073s 00:05:32.193 23:41:47 rpc_client -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:32.193 23:41:47 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:32.193 ************************************ 00:05:32.193 END TEST rpc_client 00:05:32.193 ************************************ 00:05:32.193 23:41:47 -- common/autotest_common.sh@1136 -- # return 0 00:05:32.193 23:41:47 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:32.193 23:41:47 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:32.193 23:41:47 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:32.193 23:41:47 -- common/autotest_common.sh@10 -- # set +x 00:05:32.193 ************************************ 00:05:32.193 START TEST json_config 00:05:32.193 ************************************ 00:05:32.193 23:41:47 json_config -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:32.455 23:41:47 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:32.455 23:41:47 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:32.455 23:41:47 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:32.455 23:41:47 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:32.455 23:41:47 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:32.455 23:41:47 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:32.455 23:41:47 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:32.455 23:41:47 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:32.455 23:41:47 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:32.455 23:41:47 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:32.455 23:41:47 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:32.455 23:41:47 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:32.455 23:41:47 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:32.455 23:41:47 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:32.455 23:41:47 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:32.455 23:41:47 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:32.455 23:41:47 json_config 
-- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:32.455 23:41:47 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:32.455 23:41:47 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:32.455 23:41:47 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:32.455 23:41:47 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:32.455 23:41:47 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:32.455 23:41:47 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.456 23:41:47 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.456 23:41:47 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.456 23:41:47 json_config -- paths/export.sh@5 -- # export PATH 00:05:32.456 23:41:47 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.456 23:41:47 json_config -- nvmf/common.sh@47 -- # : 0 00:05:32.456 23:41:47 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:32.456 23:41:47 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:32.456 23:41:47 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:32.456 23:41:47 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:32.456 23:41:47 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:32.456 23:41:47 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:32.456 23:41:47 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:32.456 23:41:47 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:32.456 23:41:47 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:32.456 23:41:47 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:32.456 23:41:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:32.456 
23:41:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:32.456 23:41:47 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:32.456 23:41:47 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:32.456 23:41:47 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:32.456 23:41:47 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:32.456 23:41:47 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:32.456 23:41:47 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:32.456 23:41:47 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:32.456 23:41:47 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:32.456 23:41:47 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:32.456 23:41:47 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:32.456 23:41:47 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:32.456 23:41:47 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:32.456 INFO: JSON configuration test init 00:05:32.456 23:41:47 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:32.456 23:41:47 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:32.456 23:41:47 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:32.456 23:41:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.456 23:41:47 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:32.456 23:41:47 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:32.456 23:41:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.456 23:41:47 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:32.456 23:41:47 json_config -- json_config/common.sh@9 -- # local app=target 00:05:32.456 23:41:47 json_config -- json_config/common.sh@10 -- # shift 00:05:32.456 23:41:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:32.456 23:41:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:32.456 23:41:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:32.456 23:41:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:32.456 23:41:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:32.456 23:41:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=233567 00:05:32.456 23:41:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:32.456 Waiting for target to run... 
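For reference, the start-and-wait pattern traced here and in the next few lines boils down to the shell sketch below. Paths and flags are copied from the trace; the polling loop merely stands in for the real waitforlisten helper in autotest_common.sh, and the retry count and sleep interval are illustrative assumptions.

    # Hedged sketch: launch spdk_tgt paused at RPC init, then poll its UNIX socket.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/spdk_tgt.sock
    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &
    tgt_pid=$!
    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods answers as soon as the RPC listener is up
        if "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done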
00:05:32.456 23:41:47 json_config -- json_config/common.sh@25 -- # waitforlisten 233567 /var/tmp/spdk_tgt.sock 00:05:32.456 23:41:47 json_config -- common/autotest_common.sh@823 -- # '[' -z 233567 ']' 00:05:32.456 23:41:47 json_config -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:32.456 23:41:47 json_config -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:32.456 23:41:47 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:32.456 23:41:47 json_config -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:32.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:32.456 23:41:47 json_config -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:32.456 23:41:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.456 [2024-07-15 23:41:47.500152] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:32.456 [2024-07-15 23:41:47.500206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid233567 ] 00:05:32.717 [2024-07-15 23:41:47.839771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.717 [2024-07-15 23:41:47.901309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.289 23:41:48 json_config -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:33.289 23:41:48 json_config -- common/autotest_common.sh@856 -- # return 0 00:05:33.289 23:41:48 json_config -- json_config/common.sh@26 -- # echo '' 00:05:33.289 00:05:33.289 23:41:48 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:33.289 23:41:48 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:33.289 23:41:48 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:33.289 23:41:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.289 23:41:48 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:33.289 23:41:48 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:33.289 23:41:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.289 23:41:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.290 23:41:48 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:33.290 23:41:48 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:33.290 23:41:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:33.860 23:41:48 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:33.860 23:41:48 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:33.860 23:41:48 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:33.860 23:41:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.860 23:41:48 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:33.860 
23:41:48 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:33.860 23:41:48 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:33.860 23:41:48 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:33.860 23:41:48 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:33.860 23:41:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:33.860 23:41:49 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:33.860 23:41:49 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:33.860 23:41:49 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:33.860 23:41:49 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:33.860 23:41:49 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.860 23:41:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.120 23:41:49 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:34.120 23:41:49 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:34.120 23:41:49 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:34.120 23:41:49 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:34.120 23:41:49 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:34.120 23:41:49 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:34.120 23:41:49 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:34.120 23:41:49 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:34.120 23:41:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.120 23:41:49 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:34.120 23:41:49 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:34.120 23:41:49 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:34.120 23:41:49 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:34.120 23:41:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:34.120 MallocForNvmf0 00:05:34.120 23:41:49 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:34.120 23:41:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:34.381 MallocForNvmf1 00:05:34.381 23:41:49 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:34.381 23:41:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:34.381 [2024-07-15 23:41:49.527285] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:34.381 23:41:49 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:34.381 23:41:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:34.640 23:41:49 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:34.640 23:41:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:34.900 23:41:49 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:34.901 23:41:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:34.901 23:41:50 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:34.901 23:41:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:35.161 [2024-07-15 23:41:50.177361] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:35.162 23:41:50 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:35.162 23:41:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:35.162 23:41:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.162 23:41:50 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:35.162 23:41:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:35.162 23:41:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.162 23:41:50 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:35.162 23:41:50 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:35.162 23:41:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:35.422 MallocBdevForConfigChangeCheck 00:05:35.422 23:41:50 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:35.422 23:41:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:35.422 23:41:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.422 23:41:50 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:35.422 23:41:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:35.682 23:41:50 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:35.682 INFO: shutting down applications... 
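Collected in one place, the RPC sequence traced above (malloc bdevs, TCP transport, subsystem, namespaces, listener) looks like the following; every call and argument is taken verbatim from the trace, only the shell wrapper variables are added for readability.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MiB bdev, 512-byte blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MiB bdev, 1024-byte blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420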
00:05:35.682 23:41:50 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:35.682 23:41:50 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:35.682 23:41:50 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:35.682 23:41:50 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:36.252 Calling clear_iscsi_subsystem 00:05:36.252 Calling clear_nvmf_subsystem 00:05:36.252 Calling clear_nbd_subsystem 00:05:36.252 Calling clear_ublk_subsystem 00:05:36.252 Calling clear_vhost_blk_subsystem 00:05:36.252 Calling clear_vhost_scsi_subsystem 00:05:36.252 Calling clear_bdev_subsystem 00:05:36.252 23:41:51 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:36.252 23:41:51 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:36.252 23:41:51 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:36.252 23:41:51 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:36.252 23:41:51 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:36.252 23:41:51 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:36.512 23:41:51 json_config -- json_config/json_config.sh@345 -- # break 00:05:36.512 23:41:51 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:36.512 23:41:51 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:36.512 23:41:51 json_config -- json_config/common.sh@31 -- # local app=target 00:05:36.512 23:41:51 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:36.512 23:41:51 json_config -- json_config/common.sh@35 -- # [[ -n 233567 ]] 00:05:36.512 23:41:51 json_config -- json_config/common.sh@38 -- # kill -SIGINT 233567 00:05:36.512 23:41:51 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:36.512 23:41:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.512 23:41:51 json_config -- json_config/common.sh@41 -- # kill -0 233567 00:05:36.512 23:41:51 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.084 23:41:51 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.084 23:41:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.084 23:41:51 json_config -- json_config/common.sh@41 -- # kill -0 233567 00:05:37.084 23:41:51 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:37.084 23:41:51 json_config -- json_config/common.sh@43 -- # break 00:05:37.084 23:41:51 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:37.084 23:41:51 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:37.084 SPDK target shutdown done 00:05:37.084 23:41:51 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:37.084 INFO: relaunching applications... 
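The shutdown traced above follows a simple pattern: send SIGINT, then poll the PID for up to 30 half-second intervals before giving up. A minimal sketch, with the PID reported in this run standing in for the generic app_pid variable:

    app_pid=233567        # PID reported earlier in this run; illustrative
    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$app_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done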
00:05:37.084 23:41:51 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.084 23:41:51 json_config -- json_config/common.sh@9 -- # local app=target 00:05:37.084 23:41:51 json_config -- json_config/common.sh@10 -- # shift 00:05:37.084 23:41:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:37.084 23:41:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:37.084 23:41:51 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:37.084 23:41:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.084 23:41:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.085 23:41:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=234461 00:05:37.085 23:41:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:37.085 Waiting for target to run... 00:05:37.085 23:41:51 json_config -- json_config/common.sh@25 -- # waitforlisten 234461 /var/tmp/spdk_tgt.sock 00:05:37.085 23:41:51 json_config -- common/autotest_common.sh@823 -- # '[' -z 234461 ']' 00:05:37.085 23:41:51 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.085 23:41:51 json_config -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:37.085 23:41:51 json_config -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:37.085 23:41:51 json_config -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:37.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:37.085 23:41:51 json_config -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:37.085 23:41:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.085 [2024-07-15 23:41:52.051568] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:37.085 [2024-07-15 23:41:52.051629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid234461 ] 00:05:37.346 [2024-07-15 23:41:52.472753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.346 [2024-07-15 23:41:52.534520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.918 [2024-07-15 23:41:53.033495] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.918 [2024-07-15 23:41:53.065852] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:37.918 23:41:53 json_config -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:37.918 23:41:53 json_config -- common/autotest_common.sh@856 -- # return 0 00:05:37.918 23:41:53 json_config -- json_config/common.sh@26 -- # echo '' 00:05:37.918 00:05:37.918 23:41:53 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:37.918 23:41:53 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:37.918 INFO: Checking if target configuration is the same... 
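The relaunch step amounts to a save/reload round trip: dump the live configuration with save_config, then start a fresh spdk_tgt from that file. A minimal sketch, assuming the same paths as the trace:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC save_config > "$SPDK/spdk_tgt_config.json"     # dump current target state
    # ...stop the old target, then relaunch from the saved file:
    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json "$SPDK/spdk_tgt_config.json" &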
00:05:37.918 23:41:53 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.178 23:41:53 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:38.178 23:41:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.178 + '[' 2 -ne 2 ']' 00:05:38.178 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:38.178 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:38.178 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:38.178 +++ basename /dev/fd/62 00:05:38.178 ++ mktemp /tmp/62.XXX 00:05:38.178 + tmp_file_1=/tmp/62.TrA 00:05:38.178 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.178 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:38.178 + tmp_file_2=/tmp/spdk_tgt_config.json.nmJ 00:05:38.178 + ret=0 00:05:38.178 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.438 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.438 + diff -u /tmp/62.TrA /tmp/spdk_tgt_config.json.nmJ 00:05:38.438 + echo 'INFO: JSON config files are the same' 00:05:38.438 INFO: JSON config files are the same 00:05:38.438 + rm /tmp/62.TrA /tmp/spdk_tgt_config.json.nmJ 00:05:38.438 + exit 0 00:05:38.438 23:41:53 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:38.439 23:41:53 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:38.439 INFO: changing configuration and checking if this can be detected... 00:05:38.439 23:41:53 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:38.439 23:41:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:38.699 23:41:53 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:38.699 23:41:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.699 23:41:53 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.699 + '[' 2 -ne 2 ']' 00:05:38.699 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:38.699 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
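json_diff.sh compares the live configuration against the saved file by normalizing both through config_filter.py -method sort and diffing the results. A sketch of the same comparison, assuming config_filter.py reads from stdin as the trace suggests:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SORT="$SPDK/test/json_config/config_filter.py -method sort"
    live=$(mktemp /tmp/62.XXX)
    saved=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config | $SORT > "$live"
    $SORT < "$SPDK/spdk_tgt_config.json" > "$saved"
    if diff -u "$live" "$saved"; then
        echo 'INFO: JSON config files are the same'
    fi
    rm "$live" "$saved"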
00:05:38.699 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:38.699 +++ basename /dev/fd/62 00:05:38.699 ++ mktemp /tmp/62.XXX 00:05:38.699 + tmp_file_1=/tmp/62.gHC 00:05:38.699 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.699 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:38.699 + tmp_file_2=/tmp/spdk_tgt_config.json.wpj 00:05:38.699 + ret=0 00:05:38.699 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.960 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.960 + diff -u /tmp/62.gHC /tmp/spdk_tgt_config.json.wpj 00:05:38.960 + ret=1 00:05:38.960 + echo '=== Start of file: /tmp/62.gHC ===' 00:05:38.960 + cat /tmp/62.gHC 00:05:38.960 + echo '=== End of file: /tmp/62.gHC ===' 00:05:38.960 + echo '' 00:05:38.960 + echo '=== Start of file: /tmp/spdk_tgt_config.json.wpj ===' 00:05:38.960 + cat /tmp/spdk_tgt_config.json.wpj 00:05:38.960 + echo '=== End of file: /tmp/spdk_tgt_config.json.wpj ===' 00:05:38.960 + echo '' 00:05:38.960 + rm /tmp/62.gHC /tmp/spdk_tgt_config.json.wpj 00:05:38.960 + exit 1 00:05:38.960 23:41:53 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:38.960 INFO: configuration change detected. 00:05:38.960 23:41:53 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:38.960 23:41:53 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:38.960 23:41:53 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:38.960 23:41:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.960 23:41:53 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:38.960 23:41:53 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:38.960 23:41:54 json_config -- json_config/json_config.sh@317 -- # [[ -n 234461 ]] 00:05:38.960 23:41:54 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:38.960 23:41:54 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:38.960 23:41:54 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:38.960 23:41:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.960 23:41:54 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:38.960 23:41:54 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:38.960 23:41:54 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:38.960 23:41:54 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:38.960 23:41:54 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:38.960 23:41:54 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:38.960 23:41:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:38.960 23:41:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.960 23:41:54 json_config -- json_config/json_config.sh@323 -- # killprocess 234461 00:05:38.960 23:41:54 json_config -- common/autotest_common.sh@942 -- # '[' -z 234461 ']' 00:05:38.960 23:41:54 json_config -- common/autotest_common.sh@946 -- # kill -0 234461 00:05:38.960 23:41:54 json_config -- common/autotest_common.sh@947 -- # uname 00:05:38.960 23:41:54 json_config -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:38.960 23:41:54 
json_config -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 234461 00:05:38.960 23:41:54 json_config -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:38.960 23:41:54 json_config -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:38.960 23:41:54 json_config -- common/autotest_common.sh@960 -- # echo 'killing process with pid 234461' 00:05:38.960 killing process with pid 234461 00:05:38.960 23:41:54 json_config -- common/autotest_common.sh@961 -- # kill 234461 00:05:38.960 23:41:54 json_config -- common/autotest_common.sh@966 -- # wait 234461 00:05:39.222 23:41:54 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.222 23:41:54 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:39.222 23:41:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:39.222 23:41:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.483 23:41:54 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:39.483 23:41:54 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:39.483 INFO: Success 00:05:39.483 00:05:39.483 real 0m7.106s 00:05:39.483 user 0m8.401s 00:05:39.483 sys 0m1.877s 00:05:39.483 23:41:54 json_config -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:39.483 23:41:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.483 ************************************ 00:05:39.483 END TEST json_config 00:05:39.483 ************************************ 00:05:39.483 23:41:54 -- common/autotest_common.sh@1136 -- # return 0 00:05:39.483 23:41:54 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:39.483 23:41:54 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:39.483 23:41:54 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:39.484 23:41:54 -- common/autotest_common.sh@10 -- # set +x 00:05:39.484 ************************************ 00:05:39.484 START TEST json_config_extra_key 00:05:39.484 ************************************ 00:05:39.484 23:41:54 json_config_extra_key -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:39.484 23:41:54 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:39.484 23:41:54 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:39.484 23:41:54 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:39.484 23:41:54 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:39.484 23:41:54 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.484 23:41:54 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.484 23:41:54 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.484 23:41:54 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:39.484 23:41:54 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:39.484 23:41:54 
json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:39.484 23:41:54 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:39.484 23:41:54 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:39.484 23:41:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:39.484 23:41:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:39.484 23:41:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:39.484 23:41:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:39.484 23:41:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:39.484 23:41:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:39.484 23:41:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:39.484 23:41:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:39.484 23:41:54 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:39.484 23:41:54 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:39.484 INFO: launching applications... 00:05:39.484 23:41:54 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:39.484 23:41:54 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:39.484 23:41:54 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:39.484 23:41:54 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:39.484 23:41:54 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:39.484 23:41:54 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:39.484 23:41:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.484 23:41:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.484 23:41:54 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=235164 00:05:39.484 23:41:54 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:39.484 Waiting for target to run... 
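The common.sh bookkeeping declared above keys everything on the app name ('target'), so the extra-key variant only has to swap in a different config path. A sketch of how those associative arrays drive the launch (the launch line itself appears in the next trace lines):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]="$SPDK/test/json_config/extra_key.json")
    app=target
    # app_params left unquoted on purpose so the flags split into separate arguments
    "$SPDK/build/bin/spdk_tgt" ${app_params[$app]} -r "${app_socket[$app]}" \
        --json "${configs_path[$app]}" &
    app_pid[$app]=$!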
00:05:39.484 23:41:54 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 235164 /var/tmp/spdk_tgt.sock 00:05:39.484 23:41:54 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:39.484 23:41:54 json_config_extra_key -- common/autotest_common.sh@823 -- # '[' -z 235164 ']' 00:05:39.484 23:41:54 json_config_extra_key -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:39.484 23:41:54 json_config_extra_key -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:39.484 23:41:54 json_config_extra_key -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:39.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:39.484 23:41:54 json_config_extra_key -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:39.484 23:41:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:39.484 [2024-07-15 23:41:54.673140] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:39.746 [2024-07-15 23:41:54.673207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235164 ] 00:05:39.746 [2024-07-15 23:41:54.922875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.007 [2024-07-15 23:41:54.973585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.268 23:41:55 json_config_extra_key -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:40.268 23:41:55 json_config_extra_key -- common/autotest_common.sh@856 -- # return 0 00:05:40.268 23:41:55 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:40.268 00:05:40.268 23:41:55 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:40.268 INFO: shutting down applications... 
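Stepping back: the killprocess helper that closes most tests in this log (skip_rpc above, json_config, and alias_rpc/spdkcli_tcp below) follows the pattern traced around autotest_common.sh lines 942-966. A hedged reconstruction of that pattern; anything not visible in the trace, notably the handling of sudo-wrapped targets, is elided:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                    # is the PID still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")       # reactor_0 in the runs above
        if [ "$name" != "sudo" ]; then                # sudo wrappers need extra handling (not shown)
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"
        fi
    }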
00:05:40.268 23:41:55 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:40.268 23:41:55 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:40.268 23:41:55 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:40.268 23:41:55 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 235164 ]] 00:05:40.268 23:41:55 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 235164 00:05:40.268 23:41:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:40.268 23:41:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.268 23:41:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 235164 00:05:40.268 23:41:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:40.838 23:41:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:40.838 23:41:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.838 23:41:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 235164 00:05:40.838 23:41:55 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:40.838 23:41:55 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:40.838 23:41:55 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:40.838 23:41:55 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:40.838 SPDK target shutdown done 00:05:40.838 23:41:55 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:40.838 Success 00:05:40.838 00:05:40.838 real 0m1.423s 00:05:40.838 user 0m1.092s 00:05:40.838 sys 0m0.347s 00:05:40.838 23:41:55 json_config_extra_key -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:40.838 23:41:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:40.838 ************************************ 00:05:40.838 END TEST json_config_extra_key 00:05:40.838 ************************************ 00:05:40.838 23:41:55 -- common/autotest_common.sh@1136 -- # return 0 00:05:40.838 23:41:55 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:40.838 23:41:55 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:40.838 23:41:55 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:40.838 23:41:55 -- common/autotest_common.sh@10 -- # set +x 00:05:40.838 ************************************ 00:05:40.838 START TEST alias_rpc 00:05:40.838 ************************************ 00:05:40.838 23:41:56 alias_rpc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:41.098 * Looking for test storage... 
00:05:41.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:41.098 23:41:56 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:41.098 23:41:56 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=235545 00:05:41.098 23:41:56 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 235545 00:05:41.098 23:41:56 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:41.098 23:41:56 alias_rpc -- common/autotest_common.sh@823 -- # '[' -z 235545 ']' 00:05:41.098 23:41:56 alias_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.098 23:41:56 alias_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:41.098 23:41:56 alias_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.098 23:41:56 alias_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:41.098 23:41:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.098 [2024-07-15 23:41:56.159766] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:41.098 [2024-07-15 23:41:56.159835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235545 ] 00:05:41.098 [2024-07-15 23:41:56.230445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.359 [2024-07-15 23:41:56.295562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.359 23:41:56 alias_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:41.359 23:41:56 alias_rpc -- common/autotest_common.sh@856 -- # return 0 00:05:41.359 23:41:56 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:41.620 23:41:56 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 235545 00:05:41.620 23:41:56 alias_rpc -- common/autotest_common.sh@942 -- # '[' -z 235545 ']' 00:05:41.620 23:41:56 alias_rpc -- common/autotest_common.sh@946 -- # kill -0 235545 00:05:41.620 23:41:56 alias_rpc -- common/autotest_common.sh@947 -- # uname 00:05:41.620 23:41:56 alias_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:41.620 23:41:56 alias_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 235545 00:05:41.620 23:41:56 alias_rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:41.620 23:41:56 alias_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:41.620 23:41:56 alias_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 235545' 00:05:41.620 killing process with pid 235545 00:05:41.620 23:41:56 alias_rpc -- common/autotest_common.sh@961 -- # kill 235545 00:05:41.620 23:41:56 alias_rpc -- common/autotest_common.sh@966 -- # wait 235545 00:05:41.880 00:05:41.880 real 0m0.926s 00:05:41.880 user 0m0.999s 00:05:41.880 sys 0m0.356s 00:05:41.880 23:41:56 alias_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:41.880 23:41:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.880 ************************************ 00:05:41.880 END TEST alias_rpc 
00:05:41.880 ************************************ 00:05:41.880 23:41:56 -- common/autotest_common.sh@1136 -- # return 0 00:05:41.880 23:41:56 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:41.881 23:41:56 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:41.881 23:41:56 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:41.881 23:41:56 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:41.881 23:41:56 -- common/autotest_common.sh@10 -- # set +x 00:05:41.881 ************************************ 00:05:41.881 START TEST spdkcli_tcp 00:05:41.881 ************************************ 00:05:41.881 23:41:56 spdkcli_tcp -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:42.141 * Looking for test storage... 00:05:42.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:42.141 23:41:57 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:42.141 23:41:57 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:42.141 23:41:57 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:42.141 23:41:57 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:42.141 23:41:57 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:42.141 23:41:57 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:42.141 23:41:57 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:42.141 23:41:57 spdkcli_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:42.141 23:41:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.141 23:41:57 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=235729 00:05:42.141 23:41:57 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 235729 00:05:42.141 23:41:57 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:42.141 23:41:57 spdkcli_tcp -- common/autotest_common.sh@823 -- # '[' -z 235729 ']' 00:05:42.141 23:41:57 spdkcli_tcp -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.141 23:41:57 spdkcli_tcp -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:42.141 23:41:57 spdkcli_tcp -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.141 23:41:57 spdkcli_tcp -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:42.141 23:41:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.141 [2024-07-15 23:41:57.161695] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:05:42.141 [2024-07-15 23:41:57.161769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235729 ] 00:05:42.141 [2024-07-15 23:41:57.233776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.141 [2024-07-15 23:41:57.309057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.141 [2024-07-15 23:41:57.309061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.084 23:41:57 spdkcli_tcp -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:43.084 23:41:57 spdkcli_tcp -- common/autotest_common.sh@856 -- # return 0 00:05:43.084 23:41:57 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=235943 00:05:43.084 23:41:57 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:43.084 23:41:57 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:43.084 [ 00:05:43.084 "bdev_malloc_delete", 00:05:43.084 "bdev_malloc_create", 00:05:43.084 "bdev_null_resize", 00:05:43.084 "bdev_null_delete", 00:05:43.084 "bdev_null_create", 00:05:43.084 "bdev_nvme_cuse_unregister", 00:05:43.084 "bdev_nvme_cuse_register", 00:05:43.084 "bdev_opal_new_user", 00:05:43.084 "bdev_opal_set_lock_state", 00:05:43.084 "bdev_opal_delete", 00:05:43.084 "bdev_opal_get_info", 00:05:43.084 "bdev_opal_create", 00:05:43.084 "bdev_nvme_opal_revert", 00:05:43.084 "bdev_nvme_opal_init", 00:05:43.084 "bdev_nvme_send_cmd", 00:05:43.084 "bdev_nvme_get_path_iostat", 00:05:43.084 "bdev_nvme_get_mdns_discovery_info", 00:05:43.084 "bdev_nvme_stop_mdns_discovery", 00:05:43.084 "bdev_nvme_start_mdns_discovery", 00:05:43.084 "bdev_nvme_set_multipath_policy", 00:05:43.084 "bdev_nvme_set_preferred_path", 00:05:43.084 "bdev_nvme_get_io_paths", 00:05:43.084 "bdev_nvme_remove_error_injection", 00:05:43.084 "bdev_nvme_add_error_injection", 00:05:43.084 "bdev_nvme_get_discovery_info", 00:05:43.084 "bdev_nvme_stop_discovery", 00:05:43.084 "bdev_nvme_start_discovery", 00:05:43.084 "bdev_nvme_get_controller_health_info", 00:05:43.084 "bdev_nvme_disable_controller", 00:05:43.084 "bdev_nvme_enable_controller", 00:05:43.084 "bdev_nvme_reset_controller", 00:05:43.084 "bdev_nvme_get_transport_statistics", 00:05:43.084 "bdev_nvme_apply_firmware", 00:05:43.084 "bdev_nvme_detach_controller", 00:05:43.084 "bdev_nvme_get_controllers", 00:05:43.084 "bdev_nvme_attach_controller", 00:05:43.084 "bdev_nvme_set_hotplug", 00:05:43.084 "bdev_nvme_set_options", 00:05:43.084 "bdev_passthru_delete", 00:05:43.084 "bdev_passthru_create", 00:05:43.084 "bdev_lvol_set_parent_bdev", 00:05:43.084 "bdev_lvol_set_parent", 00:05:43.084 "bdev_lvol_check_shallow_copy", 00:05:43.084 "bdev_lvol_start_shallow_copy", 00:05:43.084 "bdev_lvol_grow_lvstore", 00:05:43.084 "bdev_lvol_get_lvols", 00:05:43.084 "bdev_lvol_get_lvstores", 00:05:43.084 "bdev_lvol_delete", 00:05:43.084 "bdev_lvol_set_read_only", 00:05:43.084 "bdev_lvol_resize", 00:05:43.084 "bdev_lvol_decouple_parent", 00:05:43.084 "bdev_lvol_inflate", 00:05:43.084 "bdev_lvol_rename", 00:05:43.084 "bdev_lvol_clone_bdev", 00:05:43.084 "bdev_lvol_clone", 00:05:43.084 "bdev_lvol_snapshot", 00:05:43.084 "bdev_lvol_create", 00:05:43.084 "bdev_lvol_delete_lvstore", 00:05:43.084 "bdev_lvol_rename_lvstore", 00:05:43.084 "bdev_lvol_create_lvstore", 
00:05:43.084 "bdev_raid_set_options", 00:05:43.084 "bdev_raid_remove_base_bdev", 00:05:43.084 "bdev_raid_add_base_bdev", 00:05:43.084 "bdev_raid_delete", 00:05:43.084 "bdev_raid_create", 00:05:43.084 "bdev_raid_get_bdevs", 00:05:43.084 "bdev_error_inject_error", 00:05:43.084 "bdev_error_delete", 00:05:43.084 "bdev_error_create", 00:05:43.084 "bdev_split_delete", 00:05:43.084 "bdev_split_create", 00:05:43.084 "bdev_delay_delete", 00:05:43.084 "bdev_delay_create", 00:05:43.084 "bdev_delay_update_latency", 00:05:43.084 "bdev_zone_block_delete", 00:05:43.084 "bdev_zone_block_create", 00:05:43.084 "blobfs_create", 00:05:43.084 "blobfs_detect", 00:05:43.084 "blobfs_set_cache_size", 00:05:43.084 "bdev_aio_delete", 00:05:43.084 "bdev_aio_rescan", 00:05:43.084 "bdev_aio_create", 00:05:43.084 "bdev_ftl_set_property", 00:05:43.084 "bdev_ftl_get_properties", 00:05:43.084 "bdev_ftl_get_stats", 00:05:43.084 "bdev_ftl_unmap", 00:05:43.084 "bdev_ftl_unload", 00:05:43.084 "bdev_ftl_delete", 00:05:43.084 "bdev_ftl_load", 00:05:43.084 "bdev_ftl_create", 00:05:43.084 "bdev_virtio_attach_controller", 00:05:43.084 "bdev_virtio_scsi_get_devices", 00:05:43.084 "bdev_virtio_detach_controller", 00:05:43.084 "bdev_virtio_blk_set_hotplug", 00:05:43.084 "bdev_iscsi_delete", 00:05:43.084 "bdev_iscsi_create", 00:05:43.084 "bdev_iscsi_set_options", 00:05:43.084 "accel_error_inject_error", 00:05:43.084 "ioat_scan_accel_module", 00:05:43.084 "dsa_scan_accel_module", 00:05:43.084 "iaa_scan_accel_module", 00:05:43.084 "vfu_virtio_create_scsi_endpoint", 00:05:43.084 "vfu_virtio_scsi_remove_target", 00:05:43.084 "vfu_virtio_scsi_add_target", 00:05:43.084 "vfu_virtio_create_blk_endpoint", 00:05:43.084 "vfu_virtio_delete_endpoint", 00:05:43.084 "keyring_file_remove_key", 00:05:43.084 "keyring_file_add_key", 00:05:43.084 "keyring_linux_set_options", 00:05:43.084 "iscsi_get_histogram", 00:05:43.084 "iscsi_enable_histogram", 00:05:43.084 "iscsi_set_options", 00:05:43.084 "iscsi_get_auth_groups", 00:05:43.084 "iscsi_auth_group_remove_secret", 00:05:43.084 "iscsi_auth_group_add_secret", 00:05:43.084 "iscsi_delete_auth_group", 00:05:43.084 "iscsi_create_auth_group", 00:05:43.084 "iscsi_set_discovery_auth", 00:05:43.084 "iscsi_get_options", 00:05:43.084 "iscsi_target_node_request_logout", 00:05:43.084 "iscsi_target_node_set_redirect", 00:05:43.084 "iscsi_target_node_set_auth", 00:05:43.084 "iscsi_target_node_add_lun", 00:05:43.084 "iscsi_get_stats", 00:05:43.084 "iscsi_get_connections", 00:05:43.084 "iscsi_portal_group_set_auth", 00:05:43.084 "iscsi_start_portal_group", 00:05:43.084 "iscsi_delete_portal_group", 00:05:43.084 "iscsi_create_portal_group", 00:05:43.084 "iscsi_get_portal_groups", 00:05:43.084 "iscsi_delete_target_node", 00:05:43.084 "iscsi_target_node_remove_pg_ig_maps", 00:05:43.084 "iscsi_target_node_add_pg_ig_maps", 00:05:43.084 "iscsi_create_target_node", 00:05:43.084 "iscsi_get_target_nodes", 00:05:43.084 "iscsi_delete_initiator_group", 00:05:43.084 "iscsi_initiator_group_remove_initiators", 00:05:43.084 "iscsi_initiator_group_add_initiators", 00:05:43.084 "iscsi_create_initiator_group", 00:05:43.084 "iscsi_get_initiator_groups", 00:05:43.084 "nvmf_set_crdt", 00:05:43.084 "nvmf_set_config", 00:05:43.084 "nvmf_set_max_subsystems", 00:05:43.084 "nvmf_stop_mdns_prr", 00:05:43.084 "nvmf_publish_mdns_prr", 00:05:43.084 "nvmf_subsystem_get_listeners", 00:05:43.084 "nvmf_subsystem_get_qpairs", 00:05:43.084 "nvmf_subsystem_get_controllers", 00:05:43.084 "nvmf_get_stats", 00:05:43.084 "nvmf_get_transports", 00:05:43.084 
"nvmf_create_transport", 00:05:43.084 "nvmf_get_targets", 00:05:43.084 "nvmf_delete_target", 00:05:43.084 "nvmf_create_target", 00:05:43.084 "nvmf_subsystem_allow_any_host", 00:05:43.084 "nvmf_subsystem_remove_host", 00:05:43.084 "nvmf_subsystem_add_host", 00:05:43.084 "nvmf_ns_remove_host", 00:05:43.084 "nvmf_ns_add_host", 00:05:43.084 "nvmf_subsystem_remove_ns", 00:05:43.084 "nvmf_subsystem_add_ns", 00:05:43.084 "nvmf_subsystem_listener_set_ana_state", 00:05:43.084 "nvmf_discovery_get_referrals", 00:05:43.084 "nvmf_discovery_remove_referral", 00:05:43.084 "nvmf_discovery_add_referral", 00:05:43.084 "nvmf_subsystem_remove_listener", 00:05:43.084 "nvmf_subsystem_add_listener", 00:05:43.084 "nvmf_delete_subsystem", 00:05:43.084 "nvmf_create_subsystem", 00:05:43.084 "nvmf_get_subsystems", 00:05:43.084 "env_dpdk_get_mem_stats", 00:05:43.084 "nbd_get_disks", 00:05:43.084 "nbd_stop_disk", 00:05:43.084 "nbd_start_disk", 00:05:43.084 "ublk_recover_disk", 00:05:43.084 "ublk_get_disks", 00:05:43.084 "ublk_stop_disk", 00:05:43.084 "ublk_start_disk", 00:05:43.084 "ublk_destroy_target", 00:05:43.084 "ublk_create_target", 00:05:43.084 "virtio_blk_create_transport", 00:05:43.084 "virtio_blk_get_transports", 00:05:43.085 "vhost_controller_set_coalescing", 00:05:43.085 "vhost_get_controllers", 00:05:43.085 "vhost_delete_controller", 00:05:43.085 "vhost_create_blk_controller", 00:05:43.085 "vhost_scsi_controller_remove_target", 00:05:43.085 "vhost_scsi_controller_add_target", 00:05:43.085 "vhost_start_scsi_controller", 00:05:43.085 "vhost_create_scsi_controller", 00:05:43.085 "thread_set_cpumask", 00:05:43.085 "framework_get_governor", 00:05:43.085 "framework_get_scheduler", 00:05:43.085 "framework_set_scheduler", 00:05:43.085 "framework_get_reactors", 00:05:43.085 "thread_get_io_channels", 00:05:43.085 "thread_get_pollers", 00:05:43.085 "thread_get_stats", 00:05:43.085 "framework_monitor_context_switch", 00:05:43.085 "spdk_kill_instance", 00:05:43.085 "log_enable_timestamps", 00:05:43.085 "log_get_flags", 00:05:43.085 "log_clear_flag", 00:05:43.085 "log_set_flag", 00:05:43.085 "log_get_level", 00:05:43.085 "log_set_level", 00:05:43.085 "log_get_print_level", 00:05:43.085 "log_set_print_level", 00:05:43.085 "framework_enable_cpumask_locks", 00:05:43.085 "framework_disable_cpumask_locks", 00:05:43.085 "framework_wait_init", 00:05:43.085 "framework_start_init", 00:05:43.085 "scsi_get_devices", 00:05:43.085 "bdev_get_histogram", 00:05:43.085 "bdev_enable_histogram", 00:05:43.085 "bdev_set_qos_limit", 00:05:43.085 "bdev_set_qd_sampling_period", 00:05:43.085 "bdev_get_bdevs", 00:05:43.085 "bdev_reset_iostat", 00:05:43.085 "bdev_get_iostat", 00:05:43.085 "bdev_examine", 00:05:43.085 "bdev_wait_for_examine", 00:05:43.085 "bdev_set_options", 00:05:43.085 "notify_get_notifications", 00:05:43.085 "notify_get_types", 00:05:43.085 "accel_get_stats", 00:05:43.085 "accel_set_options", 00:05:43.085 "accel_set_driver", 00:05:43.085 "accel_crypto_key_destroy", 00:05:43.085 "accel_crypto_keys_get", 00:05:43.085 "accel_crypto_key_create", 00:05:43.085 "accel_assign_opc", 00:05:43.085 "accel_get_module_info", 00:05:43.085 "accel_get_opc_assignments", 00:05:43.085 "vmd_rescan", 00:05:43.085 "vmd_remove_device", 00:05:43.085 "vmd_enable", 00:05:43.085 "sock_get_default_impl", 00:05:43.085 "sock_set_default_impl", 00:05:43.085 "sock_impl_set_options", 00:05:43.085 "sock_impl_get_options", 00:05:43.085 "iobuf_get_stats", 00:05:43.085 "iobuf_set_options", 00:05:43.085 "keyring_get_keys", 00:05:43.085 "framework_get_pci_devices", 
00:05:43.085 "framework_get_config", 00:05:43.085 "framework_get_subsystems", 00:05:43.085 "vfu_tgt_set_base_path", 00:05:43.085 "trace_get_info", 00:05:43.085 "trace_get_tpoint_group_mask", 00:05:43.085 "trace_disable_tpoint_group", 00:05:43.085 "trace_enable_tpoint_group", 00:05:43.085 "trace_clear_tpoint_mask", 00:05:43.085 "trace_set_tpoint_mask", 00:05:43.085 "spdk_get_version", 00:05:43.085 "rpc_get_methods" 00:05:43.085 ] 00:05:43.085 23:41:58 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:43.085 23:41:58 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:43.085 23:41:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.085 23:41:58 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:43.085 23:41:58 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 235729 00:05:43.085 23:41:58 spdkcli_tcp -- common/autotest_common.sh@942 -- # '[' -z 235729 ']' 00:05:43.085 23:41:58 spdkcli_tcp -- common/autotest_common.sh@946 -- # kill -0 235729 00:05:43.085 23:41:58 spdkcli_tcp -- common/autotest_common.sh@947 -- # uname 00:05:43.085 23:41:58 spdkcli_tcp -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:43.085 23:41:58 spdkcli_tcp -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 235729 00:05:43.085 23:41:58 spdkcli_tcp -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:43.085 23:41:58 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:43.085 23:41:58 spdkcli_tcp -- common/autotest_common.sh@960 -- # echo 'killing process with pid 235729' 00:05:43.085 killing process with pid 235729 00:05:43.085 23:41:58 spdkcli_tcp -- common/autotest_common.sh@961 -- # kill 235729 00:05:43.085 23:41:58 spdkcli_tcp -- common/autotest_common.sh@966 -- # wait 235729 00:05:43.345 00:05:43.345 real 0m1.410s 00:05:43.345 user 0m2.590s 00:05:43.345 sys 0m0.426s 00:05:43.345 23:41:58 spdkcli_tcp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:43.345 23:41:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.345 ************************************ 00:05:43.345 END TEST spdkcli_tcp 00:05:43.345 ************************************ 00:05:43.345 23:41:58 -- common/autotest_common.sh@1136 -- # return 0 00:05:43.345 23:41:58 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:43.345 23:41:58 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:43.345 23:41:58 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:43.345 23:41:58 -- common/autotest_common.sh@10 -- # set +x 00:05:43.345 ************************************ 00:05:43.345 START TEST dpdk_mem_utility 00:05:43.345 ************************************ 00:05:43.345 23:41:58 dpdk_mem_utility -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:43.606 * Looking for test storage... 
00:05:43.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:43.606 23:41:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:43.606 23:41:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=236034 00:05:43.606 23:41:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 236034 00:05:43.606 23:41:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.606 23:41:58 dpdk_mem_utility -- common/autotest_common.sh@823 -- # '[' -z 236034 ']' 00:05:43.606 23:41:58 dpdk_mem_utility -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.606 23:41:58 dpdk_mem_utility -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:43.606 23:41:58 dpdk_mem_utility -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.606 23:41:58 dpdk_mem_utility -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:43.606 23:41:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:43.606 [2024-07-15 23:41:58.643798] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:43.606 [2024-07-15 23:41:58.643865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236034 ] 00:05:43.606 [2024-07-15 23:41:58.717479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.606 [2024-07-15 23:41:58.794468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.549 23:41:59 dpdk_mem_utility -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:44.549 23:41:59 dpdk_mem_utility -- common/autotest_common.sh@856 -- # return 0 00:05:44.549 23:41:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:44.549 23:41:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:44.549 23:41:59 dpdk_mem_utility -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:44.549 23:41:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:44.549 { 00:05:44.549 "filename": "/tmp/spdk_mem_dump.txt" 00:05:44.549 } 00:05:44.549 23:41:59 dpdk_mem_utility -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:44.549 23:41:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:44.549 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:44.549 1 heaps totaling size 814.000000 MiB 00:05:44.549 size: 814.000000 MiB heap id: 0 00:05:44.549 end heaps---------- 00:05:44.549 8 mempools totaling size 598.116089 MiB 00:05:44.549 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:44.549 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:44.549 size: 84.521057 MiB name: bdev_io_236034 00:05:44.549 size: 51.011292 MiB name: evtpool_236034 00:05:44.549 size: 50.003479 MiB name: msgpool_236034 00:05:44.549 size: 21.763794 
MiB name: PDU_Pool 00:05:44.549 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:44.549 size: 0.026123 MiB name: Session_Pool 00:05:44.549 end mempools------- 00:05:44.549 6 memzones totaling size 4.142822 MiB 00:05:44.549 size: 1.000366 MiB name: RG_ring_0_236034 00:05:44.549 size: 1.000366 MiB name: RG_ring_1_236034 00:05:44.549 size: 1.000366 MiB name: RG_ring_4_236034 00:05:44.549 size: 1.000366 MiB name: RG_ring_5_236034 00:05:44.549 size: 0.125366 MiB name: RG_ring_2_236034 00:05:44.549 size: 0.015991 MiB name: RG_ring_3_236034 00:05:44.549 end memzones------- 00:05:44.549 23:41:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:44.549 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:44.549 list of free elements. size: 12.519348 MiB 00:05:44.549 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:44.549 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:44.549 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:44.549 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:44.549 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:44.549 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:44.549 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:44.549 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:44.549 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:44.549 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:44.549 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:44.549 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:44.549 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:44.549 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:44.549 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:44.549 list of standard malloc elements. 
size: 199.218079 MiB 00:05:44.549 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:44.549 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:44.549 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:44.549 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:44.549 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:44.549 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:44.549 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:44.549 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:44.549 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:44.549 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:44.549 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:44.549 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:44.549 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:44.549 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:44.549 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:44.549 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:44.549 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:44.549 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:44.549 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:44.549 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:44.549 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:44.549 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:44.549 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:44.549 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:44.549 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:44.549 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:44.549 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:44.549 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:44.549 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:44.549 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:44.549 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:44.549 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:44.549 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:44.549 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:44.549 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:44.549 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:44.549 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:44.549 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:44.549 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:44.549 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:44.549 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:44.549 list of memzone associated elements. 
size: 602.262573 MiB 00:05:44.549 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:44.549 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:44.549 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:44.549 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:44.549 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:44.549 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_236034_0 00:05:44.549 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:44.549 associated memzone info: size: 48.002930 MiB name: MP_evtpool_236034_0 00:05:44.549 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:44.549 associated memzone info: size: 48.002930 MiB name: MP_msgpool_236034_0 00:05:44.549 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:44.549 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:44.549 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:44.549 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:44.549 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:44.549 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_236034 00:05:44.549 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:44.549 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_236034 00:05:44.549 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:44.549 associated memzone info: size: 1.007996 MiB name: MP_evtpool_236034 00:05:44.549 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:44.549 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:44.549 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:44.549 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:44.549 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:44.549 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:44.549 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:44.549 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:44.549 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:44.549 associated memzone info: size: 1.000366 MiB name: RG_ring_0_236034 00:05:44.549 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:44.549 associated memzone info: size: 1.000366 MiB name: RG_ring_1_236034 00:05:44.549 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:44.549 associated memzone info: size: 1.000366 MiB name: RG_ring_4_236034 00:05:44.549 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:44.549 associated memzone info: size: 1.000366 MiB name: RG_ring_5_236034 00:05:44.549 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:44.549 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_236034 00:05:44.549 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:44.549 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:44.549 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:44.549 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:44.549 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:44.549 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:44.549 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:44.549 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_236034 00:05:44.549 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:44.549 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:44.549 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:44.549 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:44.549 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:44.549 associated memzone info: size: 0.015991 MiB name: RG_ring_3_236034 00:05:44.549 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:44.549 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:44.549 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:44.549 associated memzone info: size: 0.000183 MiB name: MP_msgpool_236034 00:05:44.549 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:44.549 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_236034 00:05:44.549 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:44.549 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:44.549 23:41:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:44.550 23:41:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 236034 00:05:44.550 23:41:59 dpdk_mem_utility -- common/autotest_common.sh@942 -- # '[' -z 236034 ']' 00:05:44.550 23:41:59 dpdk_mem_utility -- common/autotest_common.sh@946 -- # kill -0 236034 00:05:44.550 23:41:59 dpdk_mem_utility -- common/autotest_common.sh@947 -- # uname 00:05:44.550 23:41:59 dpdk_mem_utility -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:44.550 23:41:59 dpdk_mem_utility -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 236034 00:05:44.550 23:41:59 dpdk_mem_utility -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:44.550 23:41:59 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:44.550 23:41:59 dpdk_mem_utility -- common/autotest_common.sh@960 -- # echo 'killing process with pid 236034' 00:05:44.550 killing process with pid 236034 00:05:44.550 23:41:59 dpdk_mem_utility -- common/autotest_common.sh@961 -- # kill 236034 00:05:44.550 23:41:59 dpdk_mem_utility -- common/autotest_common.sh@966 -- # wait 236034 00:05:44.811 00:05:44.811 real 0m1.275s 00:05:44.811 user 0m1.317s 00:05:44.811 sys 0m0.388s 00:05:44.811 23:41:59 dpdk_mem_utility -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:44.811 23:41:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:44.811 ************************************ 00:05:44.811 END TEST dpdk_mem_utility 00:05:44.811 ************************************ 00:05:44.811 23:41:59 -- common/autotest_common.sh@1136 -- # return 0 00:05:44.811 23:41:59 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:44.811 23:41:59 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:44.811 23:41:59 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:44.811 23:41:59 -- common/autotest_common.sh@10 -- # set +x 00:05:44.811 ************************************ 00:05:44.811 START TEST event 00:05:44.811 ************************************ 00:05:44.811 23:41:59 event -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:44.811 * Looking for test storage... 
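The dpdk_mem_utility output above (the heap, mempool and memzone listings) comes from scripts/dpdk_mem_info.py parsing a dump that the target writes on request. Roughly, the test boils down to the following, again with repository-relative paths standing in for the workspace paths; the dump location is the default /tmp/spdk_mem_dump.txt reported in the log:

  # ask the running spdk_tgt to write its DPDK memory statistics to /tmp/spdk_mem_dump.txt
  ./scripts/rpc.py env_dpdk_get_mem_stats
  # summarize heaps, mempools and memzones from that dump
  ./scripts/dpdk_mem_info.py
  # show the per-element breakdown for heap 0, as the test does with -m 0
  ./scripts/dpdk_mem_info.py -m 0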
00:05:44.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:44.811 23:41:59 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:44.811 23:41:59 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:44.811 23:41:59 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:44.811 23:41:59 event -- common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']' 00:05:44.811 23:41:59 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:44.811 23:41:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.811 ************************************ 00:05:44.811 START TEST event_perf 00:05:44.811 ************************************ 00:05:44.811 23:41:59 event.event_perf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:44.811 Running I/O for 1 seconds...[2024-07-15 23:41:59.993781] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:44.811 [2024-07-15 23:41:59.993874] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236404 ] 00:05:45.071 [2024-07-15 23:42:00.068789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:45.071 [2024-07-15 23:42:00.137798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.071 [2024-07-15 23:42:00.137913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.071 [2024-07-15 23:42:00.138401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.071 [2024-07-15 23:42:00.138503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.014 Running I/O for 1 seconds... 00:05:46.014 lcore 0: 175797 00:05:46.014 lcore 1: 175800 00:05:46.014 lcore 2: 175796 00:05:46.014 lcore 3: 175799 00:05:46.014 done. 00:05:46.014 00:05:46.014 real 0m1.219s 00:05:46.014 user 0m4.132s 00:05:46.014 sys 0m0.079s 00:05:46.014 23:42:01 event.event_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:46.014 23:42:01 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:46.014 ************************************ 00:05:46.014 END TEST event_perf 00:05:46.014 ************************************ 00:05:46.274 23:42:01 event -- common/autotest_common.sh@1136 -- # return 0 00:05:46.274 23:42:01 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:46.274 23:42:01 event -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:05:46.274 23:42:01 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:46.274 23:42:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.274 ************************************ 00:05:46.274 START TEST event_reactor 00:05:46.274 ************************************ 00:05:46.274 23:42:01 event.event_reactor -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:46.274 [2024-07-15 23:42:01.292099] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:05:46.274 [2024-07-15 23:42:01.292201] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid236835 ] 00:05:46.274 [2024-07-15 23:42:01.361621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.274 [2024-07-15 23:42:01.426423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.657 test_start 00:05:47.657 oneshot 00:05:47.657 tick 100 00:05:47.657 tick 100 00:05:47.657 tick 250 00:05:47.657 tick 100 00:05:47.657 tick 100 00:05:47.657 tick 100 00:05:47.657 tick 250 00:05:47.657 tick 500 00:05:47.657 tick 100 00:05:47.657 tick 100 00:05:47.657 tick 250 00:05:47.657 tick 100 00:05:47.657 tick 100 00:05:47.657 test_end 00:05:47.657 00:05:47.657 real 0m1.209s 00:05:47.657 user 0m1.133s 00:05:47.657 sys 0m0.070s 00:05:47.657 23:42:02 event.event_reactor -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:47.657 23:42:02 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:47.657 ************************************ 00:05:47.657 END TEST event_reactor 00:05:47.657 ************************************ 00:05:47.657 23:42:02 event -- common/autotest_common.sh@1136 -- # return 0 00:05:47.657 23:42:02 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:47.657 23:42:02 event -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:05:47.657 23:42:02 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:47.657 23:42:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.657 ************************************ 00:05:47.657 START TEST event_reactor_perf 00:05:47.657 ************************************ 00:05:47.657 23:42:02 event.event_reactor_perf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:47.657 [2024-07-15 23:42:02.570828] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:05:47.657 [2024-07-15 23:42:02.570922] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237215 ] 00:05:47.657 [2024-07-15 23:42:02.641188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.657 [2024-07-15 23:42:02.707561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.599 test_start 00:05:48.599 test_end 00:05:48.599 Performance: 367720 events per second 00:05:48.599 00:05:48.599 real 0m1.210s 00:05:48.599 user 0m1.128s 00:05:48.599 sys 0m0.078s 00:05:48.599 23:42:03 event.event_reactor_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:48.599 23:42:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:48.599 ************************************ 00:05:48.599 END TEST event_reactor_perf 00:05:48.599 ************************************ 00:05:48.860 23:42:03 event -- common/autotest_common.sh@1136 -- # return 0 00:05:48.860 23:42:03 event -- event/event.sh@49 -- # uname -s 00:05:48.860 23:42:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:48.860 23:42:03 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:48.860 23:42:03 event -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:48.860 23:42:03 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:48.860 23:42:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.860 ************************************ 00:05:48.860 START TEST event_scheduler 00:05:48.860 ************************************ 00:05:48.860 23:42:03 event.event_scheduler -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:48.860 * Looking for test storage... 00:05:48.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:48.860 23:42:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:48.860 23:42:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=237439 00:05:48.860 23:42:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.861 23:42:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:48.861 23:42:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 237439 00:05:48.861 23:42:03 event.event_scheduler -- common/autotest_common.sh@823 -- # '[' -z 237439 ']' 00:05:48.861 23:42:03 event.event_scheduler -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.861 23:42:03 event.event_scheduler -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:48.861 23:42:03 event.event_scheduler -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:48.861 23:42:03 event.event_scheduler -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:48.861 23:42:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.861 [2024-07-15 23:42:03.982077] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:05:48.861 [2024-07-15 23:42:03.982142] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid237439 ] 00:05:48.861 [2024-07-15 23:42:04.040127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:49.121 [2024-07-15 23:42:04.097134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.121 [2024-07-15 23:42:04.097274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.121 [2024-07-15 23:42:04.097364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.121 [2024-07-15 23:42:04.097365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.692 23:42:04 event.event_scheduler -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:49.692 23:42:04 event.event_scheduler -- common/autotest_common.sh@856 -- # return 0 00:05:49.692 23:42:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:49.692 23:42:04 event.event_scheduler -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:49.692 23:42:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.692 [2024-07-15 23:42:04.763524] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:49.692 [2024-07-15 23:42:04.763537] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:49.692 [2024-07-15 23:42:04.763545] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:49.692 [2024-07-15 23:42:04.763549] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:49.692 [2024-07-15 23:42:04.763553] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:49.692 23:42:04 event.event_scheduler -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:49.692 23:42:04 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:49.692 23:42:04 event.event_scheduler -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:49.692 23:42:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.692 [2024-07-15 23:42:04.818109] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
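Before the thread tests start, scheduler.sh switches the app (launched with --wait-for-rpc) to the dynamic scheduler and only then lets initialization proceed, which is what produces the set_opts notices above. The equivalent sequence against a target on the default RPC socket would look roughly like this; the final framework_get_scheduler call is only a convenience check, not part of the test:

  # select the dynamic scheduler while the app is still waiting for RPCs
  ./scripts/rpc.py framework_set_scheduler dynamic
  # let the framework finish starting up
  ./scripts/rpc.py framework_start_init
  # optional: confirm the active scheduler
  ./scripts/rpc.py framework_get_scheduler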
00:05:49.692 23:42:04 event.event_scheduler -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:49.692 23:42:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:49.692 23:42:04 event.event_scheduler -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:49.692 23:42:04 event.event_scheduler -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:49.692 23:42:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.692 ************************************ 00:05:49.692 START TEST scheduler_create_thread 00:05:49.692 ************************************ 00:05:49.692 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1117 -- # scheduler_create_thread 00:05:49.692 23:42:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:49.692 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:49.692 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.692 2 00:05:49.692 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:49.692 23:42:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:49.692 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:49.692 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.692 3 00:05:49.692 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:49.692 23:42:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:49.692 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:49.692 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.952 4 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.952 5 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.952 6 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.952 7 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.952 8 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.952 9 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.952 10 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:49.952 23:42:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.336 23:42:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:51.336 23:42:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:51.336 23:42:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:51.336 23:42:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:51.336 23:42:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.304 23:42:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:52.304 00:05:52.304 real 0m2.619s 00:05:52.304 user 0m0.015s 00:05:52.304 sys 0m0.001s 00:05:52.304 23:42:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:52.304 23:42:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.304 ************************************ 00:05:52.304 END TEST scheduler_create_thread 00:05:52.304 ************************************ 00:05:52.564 23:42:07 event.event_scheduler -- common/autotest_common.sh@1136 -- # return 0 00:05:52.564 23:42:07 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:52.564 23:42:07 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 237439 00:05:52.564 23:42:07 event.event_scheduler -- common/autotest_common.sh@942 -- # '[' -z 237439 ']' 00:05:52.564 23:42:07 event.event_scheduler -- common/autotest_common.sh@946 -- # kill -0 237439 00:05:52.564 23:42:07 event.event_scheduler -- common/autotest_common.sh@947 -- # uname 00:05:52.564 23:42:07 event.event_scheduler -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:52.564 23:42:07 event.event_scheduler -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 237439 00:05:52.564 23:42:07 event.event_scheduler -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:05:52.564 23:42:07 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:05:52.564 23:42:07 event.event_scheduler -- common/autotest_common.sh@960 -- # echo 'killing process with pid 237439' 00:05:52.564 killing process with pid 237439 00:05:52.564 23:42:07 event.event_scheduler -- common/autotest_common.sh@961 -- # kill 237439 00:05:52.564 23:42:07 event.event_scheduler -- common/autotest_common.sh@966 -- # wait 237439 00:05:52.824 [2024-07-15 23:42:07.951517] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
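The scheduler_create_thread sub-test above drives the test app through rpc.py and its scheduler_plugin: it creates pinned threads with different claimed loads, adjusts one, and deletes another. These RPCs come from the scheduler_plugin module loaded with --plugin, so they are only visible when that module is importable (the harness takes care of PYTHONPATH); a trimmed-down version of the calls traced above, with the thread IDs 11 and 12 taken from the run:

  # a thread pinned to core 0 that reports itself ~100% busy
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # drop the reported load of thread 11 to 50%
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  # remove thread 12 entirely
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12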
00:05:53.085 00:05:53.085 real 0m4.242s 00:05:53.085 user 0m7.990s 00:05:53.085 sys 0m0.355s 00:05:53.085 23:42:08 event.event_scheduler -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:53.085 23:42:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.085 ************************************ 00:05:53.085 END TEST event_scheduler 00:05:53.085 ************************************ 00:05:53.085 23:42:08 event -- common/autotest_common.sh@1136 -- # return 0 00:05:53.085 23:42:08 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:53.085 23:42:08 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:53.085 23:42:08 event -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:53.085 23:42:08 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:53.085 23:42:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.085 ************************************ 00:05:53.085 START TEST app_repeat 00:05:53.085 ************************************ 00:05:53.085 23:42:08 event.app_repeat -- common/autotest_common.sh@1117 -- # app_repeat_test 00:05:53.085 23:42:08 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.085 23:42:08 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.085 23:42:08 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:53.085 23:42:08 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.085 23:42:08 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:53.085 23:42:08 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:53.085 23:42:08 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:53.085 23:42:08 event.app_repeat -- event/event.sh@19 -- # repeat_pid=238329 00:05:53.085 23:42:08 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.085 23:42:08 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:53.085 23:42:08 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 238329' 00:05:53.085 Process app_repeat pid: 238329 00:05:53.085 23:42:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:53.085 23:42:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:53.085 spdk_app_start Round 0 00:05:53.085 23:42:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 238329 /var/tmp/spdk-nbd.sock 00:05:53.085 23:42:08 event.app_repeat -- common/autotest_common.sh@823 -- # '[' -z 238329 ']' 00:05:53.085 23:42:08 event.app_repeat -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.085 23:42:08 event.app_repeat -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:53.085 23:42:08 event.app_repeat -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:53.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:53.085 23:42:08 event.app_repeat -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:53.085 23:42:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.085 [2024-07-15 23:42:08.195334] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:05:53.085 [2024-07-15 23:42:08.195393] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid238329 ] 00:05:53.085 [2024-07-15 23:42:08.265150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.345 [2024-07-15 23:42:08.331646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.345 [2024-07-15 23:42:08.331648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.914 23:42:08 event.app_repeat -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:53.914 23:42:08 event.app_repeat -- common/autotest_common.sh@856 -- # return 0 00:05:53.914 23:42:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.173 Malloc0 00:05:54.173 23:42:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.173 Malloc1 00:05:54.173 23:42:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.173 23:42:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.173 23:42:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.173 23:42:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.174 23:42:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.174 23:42:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.174 23:42:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.174 23:42:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.174 23:42:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.174 23:42:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.174 23:42:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.174 23:42:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.174 23:42:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:54.174 23:42:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.174 23:42:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.174 23:42:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.433 /dev/nbd0 00:05:54.433 23:42:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.433 23:42:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.433 23:42:09 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd0 00:05:54.433 23:42:09 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:05:54.433 23:42:09 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:05:54.433 23:42:09 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:05:54.433 23:42:09 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd0 
/proc/partitions 00:05:54.433 23:42:09 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:05:54.433 23:42:09 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:05:54.433 23:42:09 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:05:54.433 23:42:09 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.433 1+0 records in 00:05:54.433 1+0 records out 00:05:54.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271796 s, 15.1 MB/s 00:05:54.433 23:42:09 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.433 23:42:09 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:05:54.433 23:42:09 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.433 23:42:09 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:05:54.433 23:42:09 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:05:54.433 23:42:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.433 23:42:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.433 23:42:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.693 /dev/nbd1 00:05:54.693 23:42:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.693 23:42:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.693 23:42:09 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd1 00:05:54.693 23:42:09 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:05:54.693 23:42:09 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:05:54.693 23:42:09 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:05:54.693 23:42:09 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd1 /proc/partitions 00:05:54.693 23:42:09 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:05:54.693 23:42:09 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:05:54.693 23:42:09 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:05:54.693 23:42:09 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.693 1+0 records in 00:05:54.693 1+0 records out 00:05:54.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285224 s, 14.4 MB/s 00:05:54.693 23:42:09 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.693 23:42:09 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:05:54.693 23:42:09 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.693 23:42:09 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:05:54.693 23:42:09 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:05:54.693 23:42:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.693 23:42:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.693 
23:42:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.693 23:42:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.693 23:42:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.693 23:42:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:54.693 { 00:05:54.693 "nbd_device": "/dev/nbd0", 00:05:54.693 "bdev_name": "Malloc0" 00:05:54.693 }, 00:05:54.693 { 00:05:54.693 "nbd_device": "/dev/nbd1", 00:05:54.693 "bdev_name": "Malloc1" 00:05:54.693 } 00:05:54.693 ]' 00:05:54.693 23:42:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.693 { 00:05:54.693 "nbd_device": "/dev/nbd0", 00:05:54.693 "bdev_name": "Malloc0" 00:05:54.693 }, 00:05:54.693 { 00:05:54.693 "nbd_device": "/dev/nbd1", 00:05:54.693 "bdev_name": "Malloc1" 00:05:54.693 } 00:05:54.693 ]' 00:05:54.693 23:42:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.954 /dev/nbd1' 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.954 /dev/nbd1' 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:54.954 256+0 records in 00:05:54.954 256+0 records out 00:05:54.954 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116978 s, 89.6 MB/s 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:54.954 256+0 records in 00:05:54.954 256+0 records out 00:05:54.954 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155811 s, 67.3 MB/s 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:54.954 256+0 records in 00:05:54.954 256+0 records out 00:05:54.954 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169648 s, 61.8 MB/s 00:05:54.954 23:42:09 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.954 23:42:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:54.954 23:42:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:54.954 23:42:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:54.954 23:42:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:54.954 23:42:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.954 23:42:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.954 23:42:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:54.954 23:42:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.954 23:42:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.954 23:42:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.954 23:42:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.214 23:42:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.214 23:42:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.214 23:42:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.215 23:42:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.215 23:42:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:55.215 23:42:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.215 23:42:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.215 23:42:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.215 23:42:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.215 23:42:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.215 23:42:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.476 23:42:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.476 23:42:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.476 23:42:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.476 23:42:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.476 23:42:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.476 23:42:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.476 23:42:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:55.476 23:42:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.476 23:42:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.476 23:42:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.476 23:42:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.476 23:42:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.476 23:42:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:55.737 23:42:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:55.737 [2024-07-15 23:42:10.813203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.737 [2024-07-15 23:42:10.877317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.737 [2024-07-15 23:42:10.877320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.737 [2024-07-15 23:42:10.908812] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:55.737 [2024-07-15 23:42:10.908847] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.054 23:42:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.054 23:42:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:59.054 spdk_app_start Round 1 00:05:59.054 23:42:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 238329 /var/tmp/spdk-nbd.sock 00:05:59.054 23:42:13 event.app_repeat -- common/autotest_common.sh@823 -- # '[' -z 238329 ']' 00:05:59.054 23:42:13 event.app_repeat -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.054 23:42:13 event.app_repeat -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:59.054 23:42:13 event.app_repeat -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
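Condensed from the nbd_common.sh xtrace above, the round-0 data pass amounts to the sketch below. Variable names and the /tmp path are illustrative; the block sizes, the oflag=direct flag and the cmp check are taken directly from the trace.

nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=/tmp/nbdrandtest   # the run above uses spdk/test/event/nbdrandtest

# write: 256 x 4 KiB of random data, replayed onto every nbd device with O_DIRECT
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for nbd in "${nbd_list[@]}"; do
  dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
done

# verify: byte-compare the first 1 MiB of each device against the reference file,
# then drop the reference file before the devices are stopped
for nbd in "${nbd_list[@]}"; do
  cmp -b -n 1M "$tmp_file" "$nbd"
done
rm "$tmp_file"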
00:05:59.054 23:42:13 event.app_repeat -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:59.054 23:42:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.054 23:42:13 event.app_repeat -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:59.054 23:42:13 event.app_repeat -- common/autotest_common.sh@856 -- # return 0 00:05:59.054 23:42:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.054 Malloc0 00:05:59.054 23:42:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.054 Malloc1 00:05:59.055 23:42:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.055 23:42:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.055 23:42:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.055 23:42:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:59.055 23:42:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.055 23:42:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:59.055 23:42:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.055 23:42:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.055 23:42:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.055 23:42:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:59.055 23:42:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.055 23:42:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:59.055 23:42:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:59.055 23:42:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:59.055 23:42:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.055 23:42:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:59.315 /dev/nbd0 00:05:59.315 23:42:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:59.315 23:42:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:59.315 23:42:14 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd0 00:05:59.315 23:42:14 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:05:59.315 23:42:14 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:05:59.315 23:42:14 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:05:59.315 23:42:14 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd0 /proc/partitions 00:05:59.315 23:42:14 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:05:59.315 23:42:14 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:05:59.315 23:42:14 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:05:59.315 23:42:14 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:59.315 1+0 records in 00:05:59.315 1+0 records out 00:05:59.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252889 s, 16.2 MB/s 00:05:59.315 23:42:14 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.315 23:42:14 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:05:59.315 23:42:14 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.315 23:42:14 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:05:59.315 23:42:14 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:05:59.315 23:42:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.315 23:42:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.315 23:42:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:59.575 /dev/nbd1 00:05:59.575 23:42:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:59.575 23:42:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:59.575 23:42:14 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd1 00:05:59.575 23:42:14 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:05:59.575 23:42:14 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:05:59.575 23:42:14 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:05:59.575 23:42:14 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd1 /proc/partitions 00:05:59.575 23:42:14 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:05:59.575 23:42:14 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:05:59.575 23:42:14 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:05:59.575 23:42:14 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.575 1+0 records in 00:05:59.575 1+0 records out 00:05:59.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272097 s, 15.1 MB/s 00:05:59.575 23:42:14 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.575 23:42:14 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:05:59.575 23:42:14 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.575 23:42:14 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:05:59.575 23:42:14 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:05:59.575 23:42:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.575 23:42:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.575 23:42:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.575 23:42:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.575 23:42:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.575 23:42:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:59.575 { 00:05:59.575 "nbd_device": "/dev/nbd0", 00:05:59.575 "bdev_name": "Malloc0" 00:05:59.575 }, 00:05:59.575 { 00:05:59.575 "nbd_device": "/dev/nbd1", 00:05:59.575 "bdev_name": "Malloc1" 00:05:59.575 } 00:05:59.575 ]' 00:05:59.575 23:42:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:59.575 { 00:05:59.575 "nbd_device": "/dev/nbd0", 00:05:59.575 "bdev_name": "Malloc0" 00:05:59.575 }, 00:05:59.575 { 00:05:59.575 "nbd_device": "/dev/nbd1", 00:05:59.575 "bdev_name": "Malloc1" 00:05:59.575 } 00:05:59.575 ]' 00:05:59.575 23:42:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.575 23:42:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:59.575 /dev/nbd1' 00:05:59.575 23:42:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.575 23:42:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:59.575 /dev/nbd1' 00:05:59.575 23:42:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:59.575 23:42:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:59.575 23:42:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:59.575 23:42:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:59.575 23:42:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:59.575 23:42:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.575 23:42:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.575 23:42:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:59.575 23:42:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.575 23:42:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:59.575 23:42:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:59.835 256+0 records in 00:05:59.835 256+0 records out 00:05:59.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124557 s, 84.2 MB/s 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:59.835 256+0 records in 00:05:59.835 256+0 records out 00:05:59.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161367 s, 65.0 MB/s 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:59.835 256+0 records in 00:05:59.835 256+0 records out 00:05:59.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168971 s, 62.1 MB/s 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.835 23:42:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.836 23:42:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.836 23:42:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:00.096 23:42:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:00.096 23:42:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:00.096 23:42:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:00.096 23:42:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.096 23:42:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.096 23:42:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:00.096 23:42:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.096 23:42:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.096 23:42:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.096 23:42:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.096 23:42:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.356 23:42:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.356 23:42:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.356 23:42:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.356 23:42:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.356 23:42:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.356 23:42:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.356 23:42:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:00.356 23:42:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.356 23:42:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.356 23:42:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:00.356 23:42:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:00.356 23:42:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:00.356 23:42:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:00.356 23:42:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:00.616 [2024-07-15 23:42:15.672382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.616 [2024-07-15 23:42:15.737284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.616 [2024-07-15 23:42:15.737287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.616 [2024-07-15 23:42:15.769556] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:00.616 [2024-07-15 23:42:15.769592] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:03.915 23:42:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:03.915 23:42:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:03.915 spdk_app_start Round 2 00:06:03.915 23:42:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 238329 /var/tmp/spdk-nbd.sock 00:06:03.915 23:42:18 event.app_repeat -- common/autotest_common.sh@823 -- # '[' -z 238329 ']' 00:06:03.915 23:42:18 event.app_repeat -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.915 23:42:18 event.app_repeat -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:03.915 23:42:18 event.app_repeat -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:03.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
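The waitfornbd step traced above for nbd0 and nbd1 reduces to a two-stage poll. This is a sketch, not the verbatim helper; the retry pacing is an assumption, the loop bounds, /proc/partitions probe and O_DIRECT read are from the trace.

waitfornbd_sketch() {
  local nbd_name=$1 i size
  # stage 1: wait for the kernel to expose the device in /proc/partitions (up to 20 tries)
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions && break
    sleep 0.1   # assumed pacing; the trace only shows the loop bounds
  done
  # stage 2: prove the device answers reads: pull one 4 KiB block with O_DIRECT
  # and check that a non-empty file came back
  for ((i = 1; i <= 20; i++)); do
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || continue
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ] && return 0
  done
  return 1
}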
00:06:03.915 23:42:18 event.app_repeat -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:03.915 23:42:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:03.915 23:42:18 event.app_repeat -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:03.915 23:42:18 event.app_repeat -- common/autotest_common.sh@856 -- # return 0 00:06:03.915 23:42:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.915 Malloc0 00:06:03.915 23:42:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.915 Malloc1 00:06:03.915 23:42:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.915 23:42:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.915 23:42:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.915 23:42:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.915 23:42:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.915 23:42:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.915 23:42:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.915 23:42:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.915 23:42:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.915 23:42:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.915 23:42:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.915 23:42:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:03.915 23:42:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:03.915 23:42:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.915 23:42:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.915 23:42:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:04.176 /dev/nbd0 00:06:04.176 23:42:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:04.176 23:42:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:04.176 23:42:19 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd0 00:06:04.176 23:42:19 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:06:04.176 23:42:19 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:06:04.176 23:42:19 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:06:04.176 23:42:19 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd0 /proc/partitions 00:06:04.176 23:42:19 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:06:04.176 23:42:19 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:06:04.176 23:42:19 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:06:04.176 23:42:19 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:04.176 1+0 records in 00:06:04.176 1+0 records out 00:06:04.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274473 s, 14.9 MB/s 00:06:04.176 23:42:19 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.176 23:42:19 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:06:04.176 23:42:19 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.176 23:42:19 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:06:04.176 23:42:19 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:06:04.176 23:42:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.176 23:42:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.176 23:42:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:04.436 /dev/nbd1 00:06:04.436 23:42:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:04.436 23:42:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:04.436 23:42:19 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd1 00:06:04.436 23:42:19 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:06:04.436 23:42:19 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:06:04.437 23:42:19 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:06:04.437 23:42:19 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd1 /proc/partitions 00:06:04.437 23:42:19 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:06:04.437 23:42:19 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:06:04.437 23:42:19 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:06:04.437 23:42:19 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.437 1+0 records in 00:06:04.437 1+0 records out 00:06:04.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206581 s, 19.8 MB/s 00:06:04.437 23:42:19 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.437 23:42:19 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:06:04.437 23:42:19 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.437 23:42:19 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:06:04.437 23:42:19 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:06:04.437 23:42:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.437 23:42:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.437 23:42:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.437 23:42:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.437 23:42:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.437 23:42:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:04.437 { 00:06:04.437 "nbd_device": "/dev/nbd0", 00:06:04.437 "bdev_name": "Malloc0" 00:06:04.437 }, 00:06:04.437 { 00:06:04.437 "nbd_device": "/dev/nbd1", 00:06:04.437 "bdev_name": "Malloc1" 00:06:04.437 } 00:06:04.437 ]' 00:06:04.437 23:42:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:04.437 { 00:06:04.437 "nbd_device": "/dev/nbd0", 00:06:04.437 "bdev_name": "Malloc0" 00:06:04.437 }, 00:06:04.437 { 00:06:04.437 "nbd_device": "/dev/nbd1", 00:06:04.437 "bdev_name": "Malloc1" 00:06:04.437 } 00:06:04.437 ]' 00:06:04.437 23:42:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:04.698 /dev/nbd1' 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:04.698 /dev/nbd1' 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:04.698 256+0 records in 00:06:04.698 256+0 records out 00:06:04.698 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121804 s, 86.1 MB/s 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:04.698 256+0 records in 00:06:04.698 256+0 records out 00:06:04.698 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159254 s, 65.8 MB/s 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:04.698 256+0 records in 00:06:04.698 256+0 records out 00:06:04.698 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173939 s, 60.3 MB/s 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.698 23:42:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:04.959 23:42:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:04.959 23:42:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:04.959 23:42:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:04.959 23:42:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.959 23:42:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.959 23:42:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.959 23:42:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.959 23:42:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.959 23:42:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.959 23:42:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.959 23:42:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.219 23:42:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:05.219 23:42:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:05.219 23:42:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.219 23:42:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:05.219 23:42:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:05.219 23:42:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.219 23:42:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:05.219 23:42:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:05.219 23:42:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:05.219 23:42:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:05.219 23:42:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:05.219 23:42:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:05.219 23:42:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:05.479 23:42:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:05.479 [2024-07-15 23:42:20.562357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.479 [2024-07-15 23:42:20.627051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.479 [2024-07-15 23:42:20.627054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.479 [2024-07-15 23:42:20.658488] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:05.480 [2024-07-15 23:42:20.658522] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:08.863 23:42:23 event.app_repeat -- event/event.sh@38 -- # waitforlisten 238329 /var/tmp/spdk-nbd.sock 00:06:08.863 23:42:23 event.app_repeat -- common/autotest_common.sh@823 -- # '[' -z 238329 ']' 00:06:08.863 23:42:23 event.app_repeat -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.863 23:42:23 event.app_repeat -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:08.863 23:42:23 event.app_repeat -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:08.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
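The nbd_get_count checks before and after nbd_stop_disks (count=2, then count=0) boil down to one RPC plus text processing. The rpc.py path and socket are the ones from the trace; the rest is an illustrative condensation.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

nbd_disks_json=$("$rpc" -s "$sock" nbd_get_disks)                   # JSON list of {nbd_device, bdev_name}
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)          # '|| true' keeps an empty list counting as 0, as in the trace
echo "$count"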
00:06:08.863 23:42:23 event.app_repeat -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:08.863 23:42:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.863 23:42:23 event.app_repeat -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:08.863 23:42:23 event.app_repeat -- common/autotest_common.sh@856 -- # return 0 00:06:08.863 23:42:23 event.app_repeat -- event/event.sh@39 -- # killprocess 238329 00:06:08.863 23:42:23 event.app_repeat -- common/autotest_common.sh@942 -- # '[' -z 238329 ']' 00:06:08.863 23:42:23 event.app_repeat -- common/autotest_common.sh@946 -- # kill -0 238329 00:06:08.863 23:42:23 event.app_repeat -- common/autotest_common.sh@947 -- # uname 00:06:08.863 23:42:23 event.app_repeat -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:08.863 23:42:23 event.app_repeat -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 238329 00:06:08.863 23:42:23 event.app_repeat -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:08.863 23:42:23 event.app_repeat -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:08.863 23:42:23 event.app_repeat -- common/autotest_common.sh@960 -- # echo 'killing process with pid 238329' 00:06:08.863 killing process with pid 238329 00:06:08.863 23:42:23 event.app_repeat -- common/autotest_common.sh@961 -- # kill 238329 00:06:08.863 23:42:23 event.app_repeat -- common/autotest_common.sh@966 -- # wait 238329 00:06:08.863 spdk_app_start is called in Round 0. 00:06:08.863 Shutdown signal received, stop current app iteration 00:06:08.863 Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 reinitialization... 00:06:08.863 spdk_app_start is called in Round 1. 00:06:08.863 Shutdown signal received, stop current app iteration 00:06:08.863 Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 reinitialization... 00:06:08.863 spdk_app_start is called in Round 2. 00:06:08.863 Shutdown signal received, stop current app iteration 00:06:08.863 Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 reinitialization... 00:06:08.863 spdk_app_start is called in Round 3. 
00:06:08.863 Shutdown signal received, stop current app iteration 00:06:08.863 23:42:23 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:08.863 23:42:23 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:08.863 00:06:08.863 real 0m15.597s 00:06:08.863 user 0m33.669s 00:06:08.863 sys 0m2.118s 00:06:08.863 23:42:23 event.app_repeat -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:08.863 23:42:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.863 ************************************ 00:06:08.863 END TEST app_repeat 00:06:08.863 ************************************ 00:06:08.863 23:42:23 event -- common/autotest_common.sh@1136 -- # return 0 00:06:08.863 23:42:23 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:08.863 23:42:23 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:08.863 23:42:23 event -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:08.863 23:42:23 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:08.863 23:42:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.863 ************************************ 00:06:08.863 START TEST cpu_locks 00:06:08.863 ************************************ 00:06:08.863 23:42:23 event.cpu_locks -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:08.863 * Looking for test storage... 00:06:08.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:08.863 23:42:23 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:08.863 23:42:23 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:08.863 23:42:23 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:08.863 23:42:23 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:08.863 23:42:23 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:08.863 23:42:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:08.863 23:42:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.863 ************************************ 00:06:08.863 START TEST default_locks 00:06:08.863 ************************************ 00:06:08.863 23:42:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1117 -- # default_locks 00:06:08.863 23:42:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=242106 00:06:08.863 23:42:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 242106 00:06:08.863 23:42:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.863 23:42:23 event.cpu_locks.default_locks -- common/autotest_common.sh@823 -- # '[' -z 242106 ']' 00:06:08.863 23:42:23 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.863 23:42:23 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:08.863 23:42:23 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
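The default_locks case that starts here asserts that an spdk_tgt pinned to one core is holding its CPU-core file lock, and that the lock disappears with the process. Condensed from the lslocks trace that follows (a sketch; the lock file's on-disk path is not shown in the log and is not assumed here):

locks_exist_sketch() {
  local pid=$1
  # spdk_tgt -m 0x1 takes a per-core file lock; lslocks lists it under the target's pid
  lslocks -p "$pid" | grep -q spdk_cpu_lock
}

# usage, mirroring the trace: assert the lock is held, kill the target,
# then expect a later waitforlisten on the dead pid to fail with "No such process"
locks_exist_sketch 242106 && echo "core lock held by pid 242106"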
00:06:08.863 23:42:23 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:08.863 23:42:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.863 [2024-07-15 23:42:24.024748] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:08.863 [2024-07-15 23:42:24.024816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid242106 ] 00:06:09.124 [2024-07-15 23:42:24.095919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.124 [2024-07-15 23:42:24.170910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.693 23:42:24 event.cpu_locks.default_locks -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:09.693 23:42:24 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # return 0 00:06:09.693 23:42:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 242106 00:06:09.693 23:42:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 242106 00:06:09.693 23:42:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.264 lslocks: write error 00:06:10.264 23:42:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 242106 00:06:10.264 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@942 -- # '[' -z 242106 ']' 00:06:10.264 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # kill -0 242106 00:06:10.264 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # uname 00:06:10.264 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:10.264 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 242106 00:06:10.264 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:10.264 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:10.264 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # echo 'killing process with pid 242106' 00:06:10.264 killing process with pid 242106 00:06:10.264 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@961 -- # kill 242106 00:06:10.264 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # wait 242106 00:06:10.524 23:42:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 242106 00:06:10.524 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # local es=0 00:06:10.524 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # valid_exec_arg waitforlisten 242106 00:06:10.524 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@630 -- # local arg=waitforlisten 00:06:10.524 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:10.524 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@634 -- # type -t waitforlisten 00:06:10.524 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:10.524 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@645 -- # waitforlisten 242106 00:06:10.524 23:42:25 
event.cpu_locks.default_locks -- common/autotest_common.sh@823 -- # '[' -z 242106 ']' 00:06:10.524 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.524 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:10.524 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.524 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:10.524 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 838: kill: (242106) - No such process 00:06:10.524 ERROR: process (pid: 242106) is no longer running 00:06:10.524 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:10.524 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # return 1 00:06:10.525 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@645 -- # es=1 00:06:10.525 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:06:10.525 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:06:10.525 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:06:10.525 23:42:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:10.525 23:42:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:10.525 23:42:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:10.525 23:42:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:10.525 00:06:10.525 real 0m1.664s 00:06:10.525 user 0m1.751s 00:06:10.525 sys 0m0.553s 00:06:10.525 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:10.525 23:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.525 ************************************ 00:06:10.525 END TEST default_locks 00:06:10.525 ************************************ 00:06:10.525 23:42:25 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:06:10.525 23:42:25 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:10.525 23:42:25 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:10.525 23:42:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:10.525 23:42:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.525 ************************************ 00:06:10.525 START TEST default_locks_via_rpc 00:06:10.525 ************************************ 00:06:10.525 23:42:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1117 -- # default_locks_via_rpc 00:06:10.525 23:42:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=242499 00:06:10.525 23:42:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 242499 00:06:10.525 23:42:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:06:10.525 23:42:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@823 -- # '[' -z 242499 ']' 00:06:10.525 23:42:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.525 23:42:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:10.525 23:42:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.525 23:42:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:10.525 23:42:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.785 [2024-07-15 23:42:25.761045] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:10.785 [2024-07-15 23:42:25.761102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid242499 ] 00:06:10.785 [2024-07-15 23:42:25.831309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.785 [2024-07-15 23:42:25.905831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.355 23:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:11.355 23:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # return 0 00:06:11.355 23:42:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:11.355 23:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:11.355 23:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.355 23:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:11.355 23:42:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:11.355 23:42:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:11.355 23:42:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:11.355 23:42:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:11.355 23:42:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:11.355 23:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:11.355 23:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.355 23:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:11.355 23:42:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 242499 00:06:11.355 23:42:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 242499 00:06:11.355 23:42:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.925 23:42:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 242499 00:06:11.925 23:42:26 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@942 -- # '[' -z 242499 ']' 00:06:11.925 23:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # kill -0 242499 00:06:11.925 23:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@947 -- # uname 00:06:11.925 23:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:11.925 23:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 242499 00:06:11.925 23:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:11.925 23:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:11.925 23:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 242499' 00:06:11.925 killing process with pid 242499 00:06:11.925 23:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@961 -- # kill 242499 00:06:11.925 23:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # wait 242499 00:06:12.185 00:06:12.185 real 0m1.473s 00:06:12.185 user 0m1.537s 00:06:12.185 sys 0m0.493s 00:06:12.185 23:42:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:12.185 23:42:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.185 ************************************ 00:06:12.185 END TEST default_locks_via_rpc 00:06:12.185 ************************************ 00:06:12.185 23:42:27 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:06:12.185 23:42:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:12.185 23:42:27 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:12.185 23:42:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:12.185 23:42:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.185 ************************************ 00:06:12.185 START TEST non_locking_app_on_locked_coremask 00:06:12.185 ************************************ 00:06:12.185 23:42:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1117 -- # non_locking_app_on_locked_coremask 00:06:12.185 23:42:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=242796 00:06:12.185 23:42:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 242796 /var/tmp/spdk.sock 00:06:12.185 23:42:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.185 23:42:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@823 -- # '[' -z 242796 ']' 00:06:12.185 23:42:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.185 23:42:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:12.185 23:42:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:12.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.185 23:42:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:12.185 23:42:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.185 [2024-07-15 23:42:27.293022] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:12.185 [2024-07-15 23:42:27.293073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid242796 ] 00:06:12.185 [2024-07-15 23:42:27.357091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.446 [2024-07-15 23:42:27.421394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.018 23:42:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:13.018 23:42:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # return 0 00:06:13.018 23:42:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=243107 00:06:13.018 23:42:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:13.018 23:42:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 243107 /var/tmp/spdk2.sock 00:06:13.018 23:42:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@823 -- # '[' -z 243107 ']' 00:06:13.018 23:42:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.018 23:42:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:13.018 23:42:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.018 23:42:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:13.018 23:42:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.018 [2024-07-15 23:42:28.124411] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:13.018 [2024-07-15 23:42:28.124462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid243107 ] 00:06:13.278 [2024-07-15 23:42:28.224778] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
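(Editor's note, not part of the captured run.) The non_locking_app_on_locked_coremask sequence above is built from two launch commands that appear verbatim in the trace: the first spdk_tgt claims core 0, the second is started on the same mask with core locks disabled and its own RPC socket, and the locks_exist helper then checks ownership with lslocks. The stray "lslocks: write error" lines in the trace are most likely just lslocks reporting a broken pipe after grep -q exits on its first match. A minimal stand-alone sketch of the pattern, assuming the binary path and the example pid from this log:

  BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
  $BIN/spdk_tgt -m 0x1 &                                                  # first target locks core 0
  $BIN/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second target skips the lock, so both can share core 0
  # locks_exist: a held core shows up as a file lock named spdk_cpu_lock_* owned by the pid
  lslocks -p 242796 | grep -q spdk_cpu_lock && echo "core lock held"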
00:06:13.278 [2024-07-15 23:42:28.224809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.278 [2024-07-15 23:42:28.354007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.848 23:42:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:13.848 23:42:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # return 0 00:06:13.848 23:42:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 242796 00:06:13.848 23:42:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 242796 00:06:13.848 23:42:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.416 lslocks: write error 00:06:14.416 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 242796 00:06:14.417 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@942 -- # '[' -z 242796 ']' 00:06:14.417 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # kill -0 242796 00:06:14.417 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # uname 00:06:14.417 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:14.417 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 242796 00:06:14.417 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:14.417 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:14.417 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 242796' 00:06:14.417 killing process with pid 242796 00:06:14.417 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # kill 242796 00:06:14.417 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # wait 242796 00:06:14.675 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 243107 00:06:14.675 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@942 -- # '[' -z 243107 ']' 00:06:14.675 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # kill -0 243107 00:06:14.675 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # uname 00:06:14.675 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:14.675 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 243107 00:06:14.935 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:14.935 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:14.935 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 243107' 00:06:14.935 killing 
process with pid 243107 00:06:14.935 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # kill 243107 00:06:14.935 23:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # wait 243107 00:06:14.935 00:06:14.935 real 0m2.857s 00:06:14.935 user 0m3.132s 00:06:14.935 sys 0m0.813s 00:06:14.935 23:42:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:14.935 23:42:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.935 ************************************ 00:06:14.935 END TEST non_locking_app_on_locked_coremask 00:06:14.935 ************************************ 00:06:15.194 23:42:30 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:06:15.194 23:42:30 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:15.194 23:42:30 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:15.194 23:42:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:15.194 23:42:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.194 ************************************ 00:06:15.194 START TEST locking_app_on_unlocked_coremask 00:06:15.194 ************************************ 00:06:15.194 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1117 -- # locking_app_on_unlocked_coremask 00:06:15.194 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=243482 00:06:15.194 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 243482 /var/tmp/spdk.sock 00:06:15.194 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:15.194 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@823 -- # '[' -z 243482 ']' 00:06:15.194 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.194 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:15.194 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.194 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:15.194 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.194 [2024-07-15 23:42:30.240415] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:15.194 [2024-07-15 23:42:30.240469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid243482 ] 00:06:15.194 [2024-07-15 23:42:30.308661] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:15.194 [2024-07-15 23:42:30.308692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.194 [2024-07-15 23:42:30.380748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.131 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:16.131 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # return 0 00:06:16.131 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=243693 00:06:16.131 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 243693 /var/tmp/spdk2.sock 00:06:16.131 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:16.131 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@823 -- # '[' -z 243693 ']' 00:06:16.131 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.131 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:16.131 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.131 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:16.131 23:42:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.131 [2024-07-15 23:42:31.049320] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:06:16.131 [2024-07-15 23:42:31.049377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid243693 ] 00:06:16.131 [2024-07-15 23:42:31.148778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.131 [2024-07-15 23:42:31.278245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.698 23:42:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:16.698 23:42:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # return 0 00:06:16.698 23:42:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 243693 00:06:16.698 23:42:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 243693 00:06:16.698 23:42:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.957 lslocks: write error 00:06:16.957 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 243482 00:06:16.957 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@942 -- # '[' -z 243482 ']' 00:06:16.957 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # kill -0 243482 00:06:16.957 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # uname 00:06:16.957 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:16.957 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 243482 00:06:16.957 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:16.957 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:16.957 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 243482' 00:06:16.957 killing process with pid 243482 00:06:16.957 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@961 -- # kill 243482 00:06:16.957 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # wait 243482 00:06:17.525 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 243693 00:06:17.525 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@942 -- # '[' -z 243693 ']' 00:06:17.525 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # kill -0 243693 00:06:17.525 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # uname 00:06:17.525 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:17.525 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 243693 00:06:17.525 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:17.525 23:42:32 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:17.525 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 243693' 00:06:17.525 killing process with pid 243693 00:06:17.525 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@961 -- # kill 243693 00:06:17.525 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # wait 243693 00:06:17.785 00:06:17.785 real 0m2.606s 00:06:17.785 user 0m2.839s 00:06:17.785 sys 0m0.767s 00:06:17.785 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:17.785 23:42:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.785 ************************************ 00:06:17.785 END TEST locking_app_on_unlocked_coremask 00:06:17.785 ************************************ 00:06:17.785 23:42:32 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:06:17.785 23:42:32 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:17.785 23:42:32 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:17.785 23:42:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:17.785 23:42:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.785 ************************************ 00:06:17.785 START TEST locking_app_on_locked_coremask 00:06:17.785 ************************************ 00:06:17.785 23:42:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1117 -- # locking_app_on_locked_coremask 00:06:17.785 23:42:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=244169 00:06:17.785 23:42:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 244169 /var/tmp/spdk.sock 00:06:17.785 23:42:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.785 23:42:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@823 -- # '[' -z 244169 ']' 00:06:17.785 23:42:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.785 23:42:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:17.785 23:42:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.785 23:42:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:17.785 23:42:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.785 [2024-07-15 23:42:32.919072] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:06:17.785 [2024-07-15 23:42:32.919126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid244169 ] 00:06:18.045 [2024-07-15 23:42:32.987952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.045 [2024-07-15 23:42:33.062130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.615 23:42:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:18.615 23:42:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # return 0 00:06:18.615 23:42:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=244197 00:06:18.615 23:42:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 244197 /var/tmp/spdk2.sock 00:06:18.615 23:42:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # local es=0 00:06:18.615 23:42:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:18.615 23:42:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # valid_exec_arg waitforlisten 244197 /var/tmp/spdk2.sock 00:06:18.615 23:42:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@630 -- # local arg=waitforlisten 00:06:18.615 23:42:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:18.615 23:42:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@634 -- # type -t waitforlisten 00:06:18.615 23:42:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:18.615 23:42:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@645 -- # waitforlisten 244197 /var/tmp/spdk2.sock 00:06:18.615 23:42:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@823 -- # '[' -z 244197 ']' 00:06:18.615 23:42:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.615 23:42:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:18.615 23:42:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.615 23:42:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:18.615 23:42:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.615 [2024-07-15 23:42:33.727245] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:06:18.615 [2024-07-15 23:42:33.727299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid244197 ] 00:06:18.875 [2024-07-15 23:42:33.824528] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 244169 has claimed it. 00:06:18.875 [2024-07-15 23:42:33.824566] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:19.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 838: kill: (244197) - No such process 00:06:19.445 ERROR: process (pid: 244197) is no longer running 00:06:19.445 23:42:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:19.445 23:42:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # return 1 00:06:19.445 23:42:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@645 -- # es=1 00:06:19.445 23:42:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:06:19.445 23:42:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:06:19.445 23:42:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:06:19.445 23:42:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 244169 00:06:19.445 23:42:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 244169 00:06:19.445 23:42:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.445 lslocks: write error 00:06:19.445 23:42:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 244169 00:06:19.445 23:42:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@942 -- # '[' -z 244169 ']' 00:06:19.445 23:42:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # kill -0 244169 00:06:19.445 23:42:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # uname 00:06:19.445 23:42:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:19.445 23:42:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 244169 00:06:19.445 23:42:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:19.445 23:42:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:19.445 23:42:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 244169' 00:06:19.445 killing process with pid 244169 00:06:19.445 23:42:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # kill 244169 00:06:19.445 23:42:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # wait 244169 00:06:19.705 00:06:19.705 real 0m1.931s 00:06:19.705 user 0m2.140s 00:06:19.705 sys 0m0.519s 00:06:19.705 23:42:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:19.705 
23:42:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.705 ************************************ 00:06:19.705 END TEST locking_app_on_locked_coremask 00:06:19.705 ************************************ 00:06:19.705 23:42:34 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:06:19.705 23:42:34 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:19.706 23:42:34 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:19.706 23:42:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:19.706 23:42:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.706 ************************************ 00:06:19.706 START TEST locking_overlapped_coremask 00:06:19.706 ************************************ 00:06:19.706 23:42:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1117 -- # locking_overlapped_coremask 00:06:19.706 23:42:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=244556 00:06:19.706 23:42:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 244556 /var/tmp/spdk.sock 00:06:19.706 23:42:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:19.706 23:42:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@823 -- # '[' -z 244556 ']' 00:06:19.706 23:42:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.706 23:42:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:19.706 23:42:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.706 23:42:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:19.706 23:42:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.966 [2024-07-15 23:42:34.922067] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:06:19.966 [2024-07-15 23:42:34.922117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid244556 ] 00:06:19.966 [2024-07-15 23:42:34.988457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.966 [2024-07-15 23:42:35.055993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.966 [2024-07-15 23:42:35.056108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.966 [2024-07-15 23:42:35.056111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.537 23:42:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:20.537 23:42:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # return 0 00:06:20.537 23:42:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=244647 00:06:20.537 23:42:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 244647 /var/tmp/spdk2.sock 00:06:20.537 23:42:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # local es=0 00:06:20.537 23:42:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:20.537 23:42:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # valid_exec_arg waitforlisten 244647 /var/tmp/spdk2.sock 00:06:20.537 23:42:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@630 -- # local arg=waitforlisten 00:06:20.537 23:42:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:20.537 23:42:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@634 -- # type -t waitforlisten 00:06:20.537 23:42:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:20.537 23:42:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@645 -- # waitforlisten 244647 /var/tmp/spdk2.sock 00:06:20.537 23:42:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@823 -- # '[' -z 244647 ']' 00:06:20.537 23:42:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.537 23:42:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:20.537 23:42:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.537 23:42:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:20.537 23:42:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.798 [2024-07-15 23:42:35.750721] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:06:20.798 [2024-07-15 23:42:35.750777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid244647 ] 00:06:20.798 [2024-07-15 23:42:35.829177] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 244556 has claimed it. 00:06:20.798 [2024-07-15 23:42:35.829207] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:21.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 838: kill: (244647) - No such process 00:06:21.369 ERROR: process (pid: 244647) is no longer running 00:06:21.369 23:42:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:21.369 23:42:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # return 1 00:06:21.369 23:42:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@645 -- # es=1 00:06:21.369 23:42:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:06:21.369 23:42:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:06:21.369 23:42:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:06:21.369 23:42:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:21.369 23:42:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:21.369 23:42:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:21.369 23:42:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:21.369 23:42:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 244556 00:06:21.369 23:42:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@942 -- # '[' -z 244556 ']' 00:06:21.369 23:42:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # kill -0 244556 00:06:21.369 23:42:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # uname 00:06:21.369 23:42:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:21.369 23:42:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 244556 00:06:21.369 23:42:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:21.369 23:42:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:21.369 23:42:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 244556' 00:06:21.369 killing process with pid 244556 00:06:21.369 23:42:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@961 -- # kill 244556 00:06:21.369 23:42:36 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # wait 244556 00:06:21.629 00:06:21.629 real 0m1.758s 00:06:21.629 user 0m4.984s 00:06:21.629 sys 0m0.363s 00:06:21.629 23:42:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:21.629 23:42:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.629 ************************************ 00:06:21.629 END TEST locking_overlapped_coremask 00:06:21.629 ************************************ 00:06:21.629 23:42:36 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:06:21.629 23:42:36 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:21.629 23:42:36 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:21.629 23:42:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:21.629 23:42:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.629 ************************************ 00:06:21.629 START TEST locking_overlapped_coremask_via_rpc 00:06:21.629 ************************************ 00:06:21.629 23:42:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1117 -- # locking_overlapped_coremask_via_rpc 00:06:21.629 23:42:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=244928 00:06:21.629 23:42:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 244928 /var/tmp/spdk.sock 00:06:21.629 23:42:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:21.629 23:42:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@823 -- # '[' -z 244928 ']' 00:06:21.629 23:42:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.629 23:42:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:21.629 23:42:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.629 23:42:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:21.629 23:42:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.629 [2024-07-15 23:42:36.755908] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:21.629 [2024-07-15 23:42:36.755961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid244928 ] 00:06:21.889 [2024-07-15 23:42:36.824457] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
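(Editor's note.) The core masks in the two overlapped-coremask tests are chosen so that exactly one core collides: the first target runs with -m 0x7 and the second with -m 0x1c, so the claim_cpu_cores error above, and the matching JSON-RPC failure further below in the RPC-driven variant, are the expected outcome rather than a test failure. The overlap in bitmask terms:

  # 0x07 = 0b00111 -> cores 0,1,2
  # 0x1c = 0b11100 -> cores 2,3,4   (core 2 is requested by both, hence the lock conflict)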
00:06:21.890 [2024-07-15 23:42:36.824489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.890 [2024-07-15 23:42:36.894065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.890 [2024-07-15 23:42:36.894203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.890 [2024-07-15 23:42:36.894206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.460 23:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:22.460 23:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # return 0 00:06:22.460 23:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=245089 00:06:22.460 23:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 245089 /var/tmp/spdk2.sock 00:06:22.460 23:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@823 -- # '[' -z 245089 ']' 00:06:22.460 23:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:22.460 23:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.460 23:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:22.460 23:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.460 23:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:22.460 23:42:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.460 [2024-07-15 23:42:37.591386] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:22.460 [2024-07-15 23:42:37.591439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid245089 ] 00:06:22.722 [2024-07-15 23:42:37.673382] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:22.722 [2024-07-15 23:42:37.673408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.722 [2024-07-15 23:42:37.779271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.722 [2024-07-15 23:42:37.782351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.722 [2024-07-15 23:42:37.782353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # return 0 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # local es=0 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@645 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.294 [2024-07-15 23:42:38.355290] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 244928 has claimed it. 
00:06:23.294 request: 00:06:23.294 { 00:06:23.294 "method": "framework_enable_cpumask_locks", 00:06:23.294 "req_id": 1 00:06:23.294 } 00:06:23.294 Got JSON-RPC error response 00:06:23.294 response: 00:06:23.294 { 00:06:23.294 "code": -32603, 00:06:23.294 "message": "Failed to claim CPU core: 2" 00:06:23.294 } 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@645 -- # es=1 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 244928 /var/tmp/spdk.sock 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@823 -- # '[' -z 244928 ']' 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:23.294 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.555 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:23.555 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # return 0 00:06:23.555 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 245089 /var/tmp/spdk2.sock 00:06:23.555 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@823 -- # '[' -z 245089 ']' 00:06:23.555 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.555 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:23.555 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:23.555 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:23.555 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.555 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:23.555 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # return 0 00:06:23.555 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:23.555 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:23.555 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:23.555 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:23.555 00:06:23.555 real 0m1.998s 00:06:23.555 user 0m0.758s 00:06:23.555 sys 0m0.167s 00:06:23.555 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:23.555 23:42:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.555 ************************************ 00:06:23.555 END TEST locking_overlapped_coremask_via_rpc 00:06:23.555 ************************************ 00:06:23.555 23:42:38 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:06:23.555 23:42:38 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:23.555 23:42:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 244928 ]] 00:06:23.555 23:42:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 244928 00:06:23.555 23:42:38 event.cpu_locks -- common/autotest_common.sh@942 -- # '[' -z 244928 ']' 00:06:23.555 23:42:38 event.cpu_locks -- common/autotest_common.sh@946 -- # kill -0 244928 00:06:23.555 23:42:38 event.cpu_locks -- common/autotest_common.sh@947 -- # uname 00:06:23.555 23:42:38 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:23.555 23:42:38 event.cpu_locks -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 244928 00:06:23.816 23:42:38 event.cpu_locks -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:23.816 23:42:38 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:23.816 23:42:38 event.cpu_locks -- common/autotest_common.sh@960 -- # echo 'killing process with pid 244928' 00:06:23.816 killing process with pid 244928 00:06:23.816 23:42:38 event.cpu_locks -- common/autotest_common.sh@961 -- # kill 244928 00:06:23.816 23:42:38 event.cpu_locks -- common/autotest_common.sh@966 -- # wait 244928 00:06:23.816 23:42:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 245089 ]] 00:06:23.816 23:42:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 245089 00:06:24.077 23:42:39 event.cpu_locks -- common/autotest_common.sh@942 -- # '[' -z 245089 ']' 00:06:24.077 23:42:39 event.cpu_locks -- common/autotest_common.sh@946 -- # kill -0 245089 00:06:24.077 23:42:39 event.cpu_locks -- common/autotest_common.sh@947 -- # uname 
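(Editor's note.) The request/response pair above shows the framework_enable_cpumask_locks RPC being rejected with error -32603 when it is issued against the second target on /var/tmp/spdk2.sock, because core 2 of its 0x1c mask is already locked by pid 244928. The test drives this through the rpc_cmd wrapper; assuming the standard scripts/rpc.py client in this tree exposes the same method name, a manual equivalent would look roughly like:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py framework_enable_cpumask_locks                         # first target: claims cores 0-2, succeeds
  $SPDK/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: fails on core 2 as shown above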
00:06:24.077 23:42:39 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:24.077 23:42:39 event.cpu_locks -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 245089 00:06:24.077 23:42:39 event.cpu_locks -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:06:24.077 23:42:39 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:06:24.077 23:42:39 event.cpu_locks -- common/autotest_common.sh@960 -- # echo 'killing process with pid 245089' 00:06:24.077 killing process with pid 245089 00:06:24.077 23:42:39 event.cpu_locks -- common/autotest_common.sh@961 -- # kill 245089 00:06:24.077 23:42:39 event.cpu_locks -- common/autotest_common.sh@966 -- # wait 245089 00:06:24.077 23:42:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:24.077 23:42:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:24.077 23:42:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 244928 ]] 00:06:24.077 23:42:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 244928 00:06:24.077 23:42:39 event.cpu_locks -- common/autotest_common.sh@942 -- # '[' -z 244928 ']' 00:06:24.077 23:42:39 event.cpu_locks -- common/autotest_common.sh@946 -- # kill -0 244928 00:06:24.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 946: kill: (244928) - No such process 00:06:24.077 23:42:39 event.cpu_locks -- common/autotest_common.sh@969 -- # echo 'Process with pid 244928 is not found' 00:06:24.077 Process with pid 244928 is not found 00:06:24.077 23:42:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 245089 ]] 00:06:24.077 23:42:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 245089 00:06:24.077 23:42:39 event.cpu_locks -- common/autotest_common.sh@942 -- # '[' -z 245089 ']' 00:06:24.077 23:42:39 event.cpu_locks -- common/autotest_common.sh@946 -- # kill -0 245089 00:06:24.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 946: kill: (245089) - No such process 00:06:24.077 23:42:39 event.cpu_locks -- common/autotest_common.sh@969 -- # echo 'Process with pid 245089 is not found' 00:06:24.077 Process with pid 245089 is not found 00:06:24.077 23:42:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:24.077 00:06:24.077 real 0m15.433s 00:06:24.077 user 0m26.634s 00:06:24.077 sys 0m4.551s 00:06:24.077 23:42:39 event.cpu_locks -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:24.077 23:42:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.077 ************************************ 00:06:24.077 END TEST cpu_locks 00:06:24.077 ************************************ 00:06:24.339 23:42:39 event -- common/autotest_common.sh@1136 -- # return 0 00:06:24.339 00:06:24.339 real 0m39.463s 00:06:24.339 user 1m14.899s 00:06:24.339 sys 0m7.621s 00:06:24.339 23:42:39 event -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:24.339 23:42:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.339 ************************************ 00:06:24.339 END TEST event 00:06:24.339 ************************************ 00:06:24.339 23:42:39 -- common/autotest_common.sh@1136 -- # return 0 00:06:24.339 23:42:39 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:24.339 23:42:39 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:24.339 23:42:39 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:24.339 23:42:39 -- 
common/autotest_common.sh@10 -- # set +x 00:06:24.339 ************************************ 00:06:24.339 START TEST thread 00:06:24.339 ************************************ 00:06:24.339 23:42:39 thread -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:24.339 * Looking for test storage... 00:06:24.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:24.339 23:42:39 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:24.339 23:42:39 thread -- common/autotest_common.sh@1093 -- # '[' 8 -le 1 ']' 00:06:24.339 23:42:39 thread -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:24.339 23:42:39 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.339 ************************************ 00:06:24.339 START TEST thread_poller_perf 00:06:24.339 ************************************ 00:06:24.339 23:42:39 thread.thread_poller_perf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:24.601 [2024-07-15 23:42:39.537733] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:24.601 [2024-07-15 23:42:39.537830] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid245661 ] 00:06:24.601 [2024-07-15 23:42:39.614937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.601 [2024-07-15 23:42:39.689737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.601 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:25.983 ====================================== 00:06:25.983 busy:2407757564 (cyc) 00:06:25.983 total_run_count: 287000 00:06:25.983 tsc_hz: 2400000000 (cyc) 00:06:25.983 ====================================== 00:06:25.983 poller_cost: 8389 (cyc), 3495 (nsec) 00:06:25.983 00:06:25.983 real 0m1.236s 00:06:25.983 user 0m1.144s 00:06:25.983 sys 0m0.086s 00:06:25.983 23:42:40 thread.thread_poller_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:25.983 23:42:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:25.983 ************************************ 00:06:25.983 END TEST thread_poller_perf 00:06:25.983 ************************************ 00:06:25.983 23:42:40 thread -- common/autotest_common.sh@1136 -- # return 0 00:06:25.983 23:42:40 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:25.983 23:42:40 thread -- common/autotest_common.sh@1093 -- # '[' 8 -le 1 ']' 00:06:25.983 23:42:40 thread -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:25.983 23:42:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.983 ************************************ 00:06:25.983 START TEST thread_poller_perf 00:06:25.983 ************************************ 00:06:25.983 23:42:40 thread.thread_poller_perf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:25.983 [2024-07-15 23:42:40.843575] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:25.983 [2024-07-15 23:42:40.843686] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid245815 ] 00:06:25.983 [2024-07-15 23:42:40.913521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.983 [2024-07-15 23:42:40.981586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.983 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:26.924 ====================================== 00:06:26.925 busy:2401806174 (cyc) 00:06:26.925 total_run_count: 3810000 00:06:26.925 tsc_hz: 2400000000 (cyc) 00:06:26.925 ====================================== 00:06:26.925 poller_cost: 630 (cyc), 262 (nsec) 00:06:26.925 00:06:26.925 real 0m1.214s 00:06:26.925 user 0m1.131s 00:06:26.925 sys 0m0.079s 00:06:26.925 23:42:42 thread.thread_poller_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:26.925 23:42:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.925 ************************************ 00:06:26.925 END TEST thread_poller_perf 00:06:26.925 ************************************ 00:06:26.925 23:42:42 thread -- common/autotest_common.sh@1136 -- # return 0 00:06:26.925 23:42:42 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:26.925 00:06:26.925 real 0m2.693s 00:06:26.925 user 0m2.359s 00:06:26.925 sys 0m0.340s 00:06:26.925 23:42:42 thread -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:26.925 23:42:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.925 ************************************ 00:06:26.925 END TEST thread 00:06:26.925 ************************************ 00:06:26.925 23:42:42 -- common/autotest_common.sh@1136 -- # return 0 00:06:26.925 23:42:42 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:26.925 23:42:42 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:26.925 23:42:42 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:26.925 23:42:42 -- common/autotest_common.sh@10 -- # set +x 00:06:27.186 ************************************ 00:06:27.186 START TEST accel 00:06:27.186 ************************************ 00:06:27.186 23:42:42 accel -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:27.186 * Looking for test storage... 00:06:27.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:27.186 23:42:42 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:27.186 23:42:42 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:27.186 23:42:42 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:27.186 23:42:42 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=246131 00:06:27.186 23:42:42 accel -- accel/accel.sh@63 -- # waitforlisten 246131 00:06:27.186 23:42:42 accel -- common/autotest_common.sh@823 -- # '[' -z 246131 ']' 00:06:27.186 23:42:42 accel -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.186 23:42:42 accel -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:27.186 23:42:42 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:27.186 23:42:42 accel -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
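The two thread_poller_perf passes above print busy cycles, total_run_count and tsc_hz next to the derived poller_cost; the reported cost is consistent with busy cycles divided by run count, converted to nanoseconds via the TSC rate. A minimal shell sketch cross-checking those figures (values copied from this log; the derivation itself is an assumption, not taken from the poller_perf source):

  # 1 us period run: expect ~8389 cyc and ~3495 nsec at 2.4 GHz
  echo $(( 2407757564 / 287000 ))
  echo $(( 2407757564 / 287000 * 1000000000 / 2400000000 ))
  # 0 us period run: expect ~630 cyc and ~262 nsec
  echo $(( 2401806174 / 3810000 ))
  echo $(( 2401806174 / 3810000 * 1000000000 / 2400000000 ))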
00:06:27.186 23:42:42 accel -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:27.186 23:42:42 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:27.186 23:42:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.186 23:42:42 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.186 23:42:42 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.186 23:42:42 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.186 23:42:42 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.186 23:42:42 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.186 23:42:42 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:27.186 23:42:42 accel -- accel/accel.sh@41 -- # jq -r . 00:06:27.186 [2024-07-15 23:42:42.298989] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:27.186 [2024-07-15 23:42:42.299063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid246131 ] 00:06:27.186 [2024-07-15 23:42:42.374315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.446 [2024-07-15 23:42:42.451223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.018 23:42:43 accel -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:28.018 23:42:43 accel -- common/autotest_common.sh@856 -- # return 0 00:06:28.018 23:42:43 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:28.018 23:42:43 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:28.018 23:42:43 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:28.018 23:42:43 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:28.018 23:42:43 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:28.018 23:42:43 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:28.018 23:42:43 accel -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:28.018 23:42:43 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:28.018 23:42:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.018 23:42:43 accel -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:28.018 23:42:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.018 23:42:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.018 23:42:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.018 23:42:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.018 23:42:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.018 23:42:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.018 23:42:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.018 23:42:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.018 23:42:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.018 23:42:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.018 23:42:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.018 23:42:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.018 23:42:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.018 23:42:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.018 23:42:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.018 23:42:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.018 23:42:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.018 23:42:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.018 23:42:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.018 23:42:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.018 23:42:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.018 23:42:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.018 
23:42:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.018 23:42:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.018 23:42:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.018 23:42:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.018 23:42:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.018 23:42:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.018 23:42:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:28.018 23:42:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:28.018 23:42:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:28.018 23:42:43 accel -- accel/accel.sh@75 -- # killprocess 246131 00:06:28.018 23:42:43 accel -- common/autotest_common.sh@942 -- # '[' -z 246131 ']' 00:06:28.018 23:42:43 accel -- common/autotest_common.sh@946 -- # kill -0 246131 00:06:28.018 23:42:43 accel -- common/autotest_common.sh@947 -- # uname 00:06:28.018 23:42:43 accel -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:28.018 23:42:43 accel -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 246131 00:06:28.018 23:42:43 accel -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:28.018 23:42:43 accel -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:28.018 23:42:43 accel -- common/autotest_common.sh@960 -- # echo 'killing process with pid 246131' 00:06:28.018 killing process with pid 246131 00:06:28.018 23:42:43 accel -- common/autotest_common.sh@961 -- # kill 246131 00:06:28.018 23:42:43 accel -- common/autotest_common.sh@966 -- # wait 246131 00:06:28.278 23:42:43 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:28.278 23:42:43 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:28.278 23:42:43 accel -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:06:28.278 23:42:43 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:28.278 23:42:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.278 23:42:43 accel.accel_help -- common/autotest_common.sh@1117 -- # accel_perf -h 00:06:28.278 23:42:43 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:28.278 23:42:43 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:28.278 23:42:43 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.278 23:42:43 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.278 23:42:43 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.278 23:42:43 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.278 23:42:43 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.278 23:42:43 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:28.278 23:42:43 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:28.278 23:42:43 accel.accel_help -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:28.278 23:42:43 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:28.537 23:42:43 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:28.537 23:42:43 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:28.537 23:42:43 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:06:28.537 23:42:43 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:28.537 23:42:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.537 ************************************ 00:06:28.537 START TEST accel_missing_filename 00:06:28.537 ************************************ 00:06:28.537 23:42:43 accel.accel_missing_filename -- common/autotest_common.sh@1117 -- # NOT accel_perf -t 1 -w compress 00:06:28.537 23:42:43 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # local es=0 00:06:28.537 23:42:43 accel.accel_missing_filename -- common/autotest_common.sh@644 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:28.537 23:42:43 accel.accel_missing_filename -- common/autotest_common.sh@630 -- # local arg=accel_perf 00:06:28.537 23:42:43 accel.accel_missing_filename -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:28.537 23:42:43 accel.accel_missing_filename -- common/autotest_common.sh@634 -- # type -t accel_perf 00:06:28.537 23:42:43 accel.accel_missing_filename -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:28.537 23:42:43 accel.accel_missing_filename -- common/autotest_common.sh@645 -- # accel_perf -t 1 -w compress 00:06:28.537 23:42:43 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:28.537 23:42:43 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:28.537 23:42:43 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.537 23:42:43 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.537 23:42:43 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.537 23:42:43 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.537 23:42:43 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.537 23:42:43 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:28.537 23:42:43 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:28.537 [2024-07-15 23:42:43.568924] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:28.537 [2024-07-15 23:42:43.569026] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid246499 ] 00:06:28.537 [2024-07-15 23:42:43.642816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.537 [2024-07-15 23:42:43.706314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.798 [2024-07-15 23:42:43.738382] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:28.798 [2024-07-15 23:42:43.775489] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:28.798 A filename is required. 
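The "A filename is required." error above is the expected outcome of this negative test: for the compress workload, -l names the uncompressed input file, and this run deliberately omits it. The accel_compress_verify test that follows adds -l but keeps -y, and is likewise expected to fail, since compression does not support the verify option. Roughly, with the paths this job uses (the harness also passes a -c config descriptor, omitted here):

  # missing input file -> "A filename is required."
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compress
  # input supplied, but -y (verify) is rejected for compress -> "Compression does not support the verify option"
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y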
00:06:28.798 23:42:43 accel.accel_missing_filename -- common/autotest_common.sh@645 -- # es=234 00:06:28.798 23:42:43 accel.accel_missing_filename -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:06:28.798 23:42:43 accel.accel_missing_filename -- common/autotest_common.sh@654 -- # es=106 00:06:28.798 23:42:43 accel.accel_missing_filename -- common/autotest_common.sh@655 -- # case "$es" in 00:06:28.798 23:42:43 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # es=1 00:06:28.798 23:42:43 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:06:28.798 00:06:28.798 real 0m0.290s 00:06:28.798 user 0m0.227s 00:06:28.798 sys 0m0.104s 00:06:28.798 23:42:43 accel.accel_missing_filename -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:28.798 23:42:43 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:28.798 ************************************ 00:06:28.798 END TEST accel_missing_filename 00:06:28.798 ************************************ 00:06:28.798 23:42:43 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:28.798 23:42:43 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:28.798 23:42:43 accel -- common/autotest_common.sh@1093 -- # '[' 10 -le 1 ']' 00:06:28.798 23:42:43 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:28.798 23:42:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.798 ************************************ 00:06:28.798 START TEST accel_compress_verify 00:06:28.798 ************************************ 00:06:28.798 23:42:43 accel.accel_compress_verify -- common/autotest_common.sh@1117 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:28.798 23:42:43 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # local es=0 00:06:28.798 23:42:43 accel.accel_compress_verify -- common/autotest_common.sh@644 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:28.798 23:42:43 accel.accel_compress_verify -- common/autotest_common.sh@630 -- # local arg=accel_perf 00:06:28.798 23:42:43 accel.accel_compress_verify -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:28.798 23:42:43 accel.accel_compress_verify -- common/autotest_common.sh@634 -- # type -t accel_perf 00:06:28.798 23:42:43 accel.accel_compress_verify -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:28.798 23:42:43 accel.accel_compress_verify -- common/autotest_common.sh@645 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:28.798 23:42:43 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:28.798 23:42:43 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:28.798 23:42:43 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.798 23:42:43 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.798 23:42:43 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.798 23:42:43 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.798 23:42:43 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.798 23:42:43 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:28.798 23:42:43 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:28.798 [2024-07-15 23:42:43.933909] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:28.798 [2024-07-15 23:42:43.933999] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid246524 ] 00:06:29.059 [2024-07-15 23:42:44.012883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.059 [2024-07-15 23:42:44.082931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.059 [2024-07-15 23:42:44.115145] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:29.059 [2024-07-15 23:42:44.152701] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:29.059 00:06:29.059 Compression does not support the verify option, aborting. 00:06:29.059 23:42:44 accel.accel_compress_verify -- common/autotest_common.sh@645 -- # es=161 00:06:29.059 23:42:44 accel.accel_compress_verify -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:06:29.059 23:42:44 accel.accel_compress_verify -- common/autotest_common.sh@654 -- # es=33 00:06:29.059 23:42:44 accel.accel_compress_verify -- common/autotest_common.sh@655 -- # case "$es" in 00:06:29.059 23:42:44 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # es=1 00:06:29.059 23:42:44 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:06:29.059 00:06:29.059 real 0m0.303s 00:06:29.059 user 0m0.224s 00:06:29.059 sys 0m0.118s 00:06:29.059 23:42:44 accel.accel_compress_verify -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:29.059 23:42:44 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:29.059 ************************************ 00:06:29.059 END TEST accel_compress_verify 00:06:29.059 ************************************ 00:06:29.059 23:42:44 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:29.059 23:42:44 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:29.059 23:42:44 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:06:29.059 23:42:44 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:29.059 23:42:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.319 ************************************ 00:06:29.319 START TEST accel_wrong_workload 00:06:29.319 ************************************ 00:06:29.319 23:42:44 accel.accel_wrong_workload -- common/autotest_common.sh@1117 -- # NOT accel_perf -t 1 -w foobar 00:06:29.320 23:42:44 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # local es=0 00:06:29.320 23:42:44 accel.accel_wrong_workload -- common/autotest_common.sh@644 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:29.320 23:42:44 accel.accel_wrong_workload -- common/autotest_common.sh@630 -- # local arg=accel_perf 00:06:29.320 23:42:44 accel.accel_wrong_workload -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:29.320 23:42:44 accel.accel_wrong_workload -- common/autotest_common.sh@634 -- # type -t accel_perf 00:06:29.320 23:42:44 accel.accel_wrong_workload -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 
00:06:29.320 23:42:44 accel.accel_wrong_workload -- common/autotest_common.sh@645 -- # accel_perf -t 1 -w foobar 00:06:29.320 23:42:44 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:29.320 23:42:44 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:29.320 23:42:44 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.320 23:42:44 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.320 23:42:44 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.320 23:42:44 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.320 23:42:44 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.320 23:42:44 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:29.320 23:42:44 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:29.320 Unsupported workload type: foobar 00:06:29.320 [2024-07-15 23:42:44.309208] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:29.320 accel_perf options: 00:06:29.320 [-h help message] 00:06:29.320 [-q queue depth per core] 00:06:29.320 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:29.320 [-T number of threads per core 00:06:29.320 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:29.320 [-t time in seconds] 00:06:29.320 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:29.320 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:29.320 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:29.320 [-l for compress/decompress workloads, name of uncompressed input file 00:06:29.320 [-S for crc32c workload, use this seed value (default 0) 00:06:29.320 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:29.320 [-f for fill workload, use this BYTE value (default 255) 00:06:29.320 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:29.320 [-y verify result if this switch is on] 00:06:29.320 [-a tasks to allocate per core (default: same value as -q)] 00:06:29.320 Can be used to spread operations across a wider range of memory. 
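The usage text above is printed because -w foobar is not a recognized workload; -w must name one of the listed operations. For reference, the flags described there combine in the crc32c run later in this log roughly as follows (-c feeds the JSON config the harness generates over a file descriptor and is presumably optional for a standalone run):

  # one-second crc32c run (-t 1), seed value 32 (-S), verifying results (-y)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y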
00:06:29.320 23:42:44 accel.accel_wrong_workload -- common/autotest_common.sh@645 -- # es=1 00:06:29.320 23:42:44 accel.accel_wrong_workload -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:06:29.320 23:42:44 accel.accel_wrong_workload -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:06:29.320 23:42:44 accel.accel_wrong_workload -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:06:29.320 00:06:29.320 real 0m0.035s 00:06:29.320 user 0m0.022s 00:06:29.320 sys 0m0.013s 00:06:29.320 23:42:44 accel.accel_wrong_workload -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:29.320 23:42:44 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:29.320 ************************************ 00:06:29.320 END TEST accel_wrong_workload 00:06:29.320 ************************************ 00:06:29.320 Error: writing output failed: Broken pipe 00:06:29.320 23:42:44 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:29.320 23:42:44 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:29.320 23:42:44 accel -- common/autotest_common.sh@1093 -- # '[' 10 -le 1 ']' 00:06:29.320 23:42:44 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:29.320 23:42:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.320 ************************************ 00:06:29.320 START TEST accel_negative_buffers 00:06:29.320 ************************************ 00:06:29.320 23:42:44 accel.accel_negative_buffers -- common/autotest_common.sh@1117 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:29.320 23:42:44 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # local es=0 00:06:29.320 23:42:44 accel.accel_negative_buffers -- common/autotest_common.sh@644 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:29.320 23:42:44 accel.accel_negative_buffers -- common/autotest_common.sh@630 -- # local arg=accel_perf 00:06:29.320 23:42:44 accel.accel_negative_buffers -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:29.320 23:42:44 accel.accel_negative_buffers -- common/autotest_common.sh@634 -- # type -t accel_perf 00:06:29.320 23:42:44 accel.accel_negative_buffers -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:29.320 23:42:44 accel.accel_negative_buffers -- common/autotest_common.sh@645 -- # accel_perf -t 1 -w xor -y -x -1 00:06:29.320 23:42:44 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:29.320 23:42:44 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:29.320 23:42:44 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.320 23:42:44 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.320 23:42:44 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.320 23:42:44 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.320 23:42:44 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.320 23:42:44 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:29.320 23:42:44 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:29.320 -x option must be non-negative. 
00:06:29.320 [2024-07-15 23:42:44.422633] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:29.320 accel_perf options: 00:06:29.320 [-h help message] 00:06:29.320 [-q queue depth per core] 00:06:29.320 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:29.320 [-T number of threads per core 00:06:29.320 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:29.320 [-t time in seconds] 00:06:29.320 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:29.320 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:29.320 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:29.320 [-l for compress/decompress workloads, name of uncompressed input file 00:06:29.320 [-S for crc32c workload, use this seed value (default 0) 00:06:29.320 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:29.320 [-f for fill workload, use this BYTE value (default 255) 00:06:29.320 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:29.320 [-y verify result if this switch is on] 00:06:29.320 [-a tasks to allocate per core (default: same value as -q)] 00:06:29.320 Can be used to spread operations across a wider range of memory. 00:06:29.320 23:42:44 accel.accel_negative_buffers -- common/autotest_common.sh@645 -- # es=1 00:06:29.320 23:42:44 accel.accel_negative_buffers -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:06:29.320 23:42:44 accel.accel_negative_buffers -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:06:29.320 23:42:44 accel.accel_negative_buffers -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:06:29.320 00:06:29.320 real 0m0.037s 00:06:29.320 user 0m0.023s 00:06:29.320 sys 0m0.014s 00:06:29.320 23:42:44 accel.accel_negative_buffers -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:29.320 23:42:44 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:29.320 ************************************ 00:06:29.320 END TEST accel_negative_buffers 00:06:29.320 ************************************ 00:06:29.320 Error: writing output failed: Broken pipe 00:06:29.320 23:42:44 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:29.320 23:42:44 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:29.320 23:42:44 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:06:29.320 23:42:44 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:29.320 23:42:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.320 ************************************ 00:06:29.320 START TEST accel_crc32c 00:06:29.320 ************************************ 00:06:29.320 23:42:44 accel.accel_crc32c -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:29.320 23:42:44 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:29.320 23:42:44 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:29.320 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.320 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:29.581 [2024-07-15 23:42:44.534317] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:29.581 [2024-07-15 23:42:44.534401] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid246822 ] 00:06:29.581 [2024-07-15 23:42:44.606251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.581 [2024-07-15 23:42:44.680349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.581 23:42:44 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.581 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.582 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.582 23:42:44 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.582 23:42:44 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.582 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.582 23:42:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.967 23:42:45 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:30.967 23:42:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.967 00:06:30.967 real 0m1.303s 00:06:30.967 user 0m1.193s 00:06:30.967 sys 0m0.122s 00:06:30.967 23:42:45 accel.accel_crc32c -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:30.967 23:42:45 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:30.967 ************************************ 00:06:30.967 END TEST accel_crc32c 00:06:30.967 ************************************ 00:06:30.967 23:42:45 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:30.967 23:42:45 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:30.967 23:42:45 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:06:30.967 23:42:45 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:30.967 23:42:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.967 ************************************ 00:06:30.967 START TEST accel_crc32c_C2 00:06:30.967 ************************************ 00:06:30.967 23:42:45 accel.accel_crc32c_C2 -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:30.967 23:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.967 23:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:30.967 23:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.967 23:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.967 23:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:30.967 23:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:30.967 23:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.967 23:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.967 23:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.967 23:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.967 23:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.967 23:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.967 23:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:30.967 23:42:45 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:30.967 [2024-07-15 23:42:45.913684] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:30.967 [2024-07-15 23:42:45.913749] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid247014 ] 00:06:30.967 [2024-07-15 23:42:45.983385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.967 [2024-07-15 23:42:46.055052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:30.967 23:42:46 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.967 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.968 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.968 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.968 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:30.968 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:30.968 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.968 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:30.968 23:42:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # read -r var val 00:06:32.352 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.352 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.352 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.352 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.352 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.352 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.353 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.353 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.353 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.353 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.353 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.353 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.353 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.353 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.353 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.353 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.353 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.353 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.353 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.353 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.353 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.353 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.353 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.353 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.353 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.353 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:32.353 23:42:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.353 00:06:32.353 real 0m1.300s 00:06:32.353 user 0m1.198s 00:06:32.353 sys 0m0.113s 00:06:32.353 23:42:47 accel.accel_crc32c_C2 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:32.353 23:42:47 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:32.353 ************************************ 00:06:32.353 END TEST accel_crc32c_C2 00:06:32.353 ************************************ 00:06:32.353 23:42:47 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:32.353 23:42:47 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:32.353 23:42:47 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:06:32.353 23:42:47 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:32.353 23:42:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.353 ************************************ 00:06:32.353 START TEST accel_copy 00:06:32.353 ************************************ 00:06:32.353 23:42:47 accel.accel_copy -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w copy -y 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.353 
23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:32.353 [2024-07-15 23:42:47.286868] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:32.353 [2024-07-15 23:42:47.286946] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid247295 ] 00:06:32.353 [2024-07-15 23:42:47.356252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.353 [2024-07-15 23:42:47.425863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:32.353 23:42:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.362 23:42:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.362 23:42:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.362 23:42:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.362 23:42:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.362 23:42:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.362 23:42:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.362 23:42:48 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.362 23:42:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.362 23:42:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.623 23:42:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.623 23:42:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.623 23:42:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.623 23:42:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.623 23:42:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.623 23:42:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.623 23:42:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.623 23:42:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.623 23:42:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.623 23:42:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.623 23:42:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.623 23:42:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:33.623 23:42:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:33.623 23:42:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:33.623 23:42:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:33.623 23:42:48 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.623 23:42:48 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:33.623 23:42:48 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.623 00:06:33.623 real 0m1.296s 00:06:33.623 user 0m1.193s 00:06:33.623 sys 0m0.113s 00:06:33.623 23:42:48 accel.accel_copy -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:33.623 23:42:48 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:33.623 ************************************ 00:06:33.623 END TEST accel_copy 00:06:33.623 ************************************ 00:06:33.623 23:42:48 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:33.623 23:42:48 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:33.623 23:42:48 accel -- common/autotest_common.sh@1093 -- # '[' 13 -le 1 ']' 00:06:33.623 23:42:48 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:33.623 23:42:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.623 ************************************ 00:06:33.623 START TEST accel_fill 00:06:33.623 ************************************ 00:06:33.623 23:42:48 accel.accel_fill -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:33.623 23:42:48 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:33.623 23:42:48 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:33.623 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:33.623 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:33.623 23:42:48 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:33.623 23:42:48 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:33.623 23:42:48 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:33.623 23:42:48 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.623 23:42:48 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 
0 ]] 00:06:33.623 23:42:48 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.623 23:42:48 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.623 23:42:48 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.623 23:42:48 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:33.623 23:42:48 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:33.623 [2024-07-15 23:42:48.659093] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:33.623 [2024-07-15 23:42:48.659156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid247647 ] 00:06:33.623 [2024-07-15 23:42:48.727627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.623 [2024-07-15 23:42:48.794289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:33.891 
23:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:33.891 23:42:48 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@21 
-- # case "$var" in 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:34.835 23:42:49 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.835 00:06:34.835 real 0m1.294s 00:06:34.835 user 0m1.200s 00:06:34.835 sys 0m0.105s 00:06:34.835 23:42:49 accel.accel_fill -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:34.835 23:42:49 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:34.835 ************************************ 00:06:34.835 END TEST accel_fill 00:06:34.835 ************************************ 00:06:34.835 23:42:49 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:34.835 23:42:49 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:34.835 23:42:49 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:06:34.835 23:42:49 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:34.835 23:42:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.835 ************************************ 00:06:34.835 START TEST accel_copy_crc32c 00:06:34.835 ************************************ 00:06:34.835 23:42:49 accel.accel_copy_crc32c -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w copy_crc32c -y 00:06:34.835 23:42:49 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:34.835 23:42:49 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:34.835 23:42:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.835 23:42:49 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.835 23:42:49 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:34.835 23:42:49 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:34.835 23:42:49 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:34.835 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.835 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.835 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.835 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.835 23:42:50 accel.accel_copy_crc32c 
-- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.835 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:34.835 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:35.097 [2024-07-15 23:42:50.026819] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:35.097 [2024-07-15 23:42:50.026914] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid247997 ] 00:06:35.097 [2024-07-15 23:42:50.095611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.097 [2024-07-15 23:42:50.164518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.097 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.098 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.098 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.098 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.098 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.098 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.098 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.098 23:42:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 
00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.484 00:06:36.484 real 0m1.298s 00:06:36.484 user 0m1.196s 00:06:36.484 sys 0m0.113s 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:36.484 23:42:51 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:36.484 ************************************ 00:06:36.484 END TEST accel_copy_crc32c 00:06:36.484 ************************************ 00:06:36.484 23:42:51 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:36.484 23:42:51 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:36.484 23:42:51 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:06:36.484 23:42:51 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:36.484 23:42:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.484 ************************************ 00:06:36.484 START TEST accel_copy_crc32c_C2 00:06:36.484 ************************************ 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:36.484 23:42:51 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:36.484 [2024-07-15 23:42:51.397104] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:36.484 [2024-07-15 23:42:51.397194] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid248317 ] 00:06:36.484 [2024-07-15 23:42:51.467012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.484 [2024-07-15 23:42:51.536087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.484 23:42:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.868 00:06:37.868 real 0m1.297s 00:06:37.868 user 0m1.193s 
00:06:37.868 sys 0m0.117s 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:37.868 23:42:52 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:37.868 ************************************ 00:06:37.868 END TEST accel_copy_crc32c_C2 00:06:37.868 ************************************ 00:06:37.868 23:42:52 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:37.868 23:42:52 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:37.868 23:42:52 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:06:37.868 23:42:52 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:37.868 23:42:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.868 ************************************ 00:06:37.868 START TEST accel_dualcast 00:06:37.868 ************************************ 00:06:37.868 23:42:52 accel.accel_dualcast -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w dualcast -y 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:37.868 [2024-07-15 23:42:52.769888] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:06:37.868 [2024-07-15 23:42:52.769985] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid248504 ] 00:06:37.868 [2024-07-15 23:42:52.839940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.868 [2024-07-15 23:42:52.909574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:37.868 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:37.869 23:42:52 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:37.869 23:42:52 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:39.253 23:42:54 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.253 00:06:39.253 real 0m1.298s 00:06:39.253 user 0m1.195s 00:06:39.253 sys 0m0.115s 00:06:39.253 23:42:54 accel.accel_dualcast -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:39.253 23:42:54 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:39.253 ************************************ 00:06:39.253 END TEST accel_dualcast 00:06:39.253 ************************************ 00:06:39.253 23:42:54 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:39.253 23:42:54 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:39.253 23:42:54 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:06:39.253 23:42:54 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:39.253 23:42:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.253 ************************************ 00:06:39.253 START TEST accel_compare 00:06:39.253 ************************************ 00:06:39.253 23:42:54 accel.accel_compare -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w compare -y 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:39.253 [2024-07-15 23:42:54.143781] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:06:39.253 [2024-07-15 23:42:54.143842] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid248735 ] 00:06:39.253 [2024-07-15 23:42:54.213974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.253 [2024-07-15 23:42:54.282943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.253 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.253 
23:42:54 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:39.254 23:42:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:40.640 23:42:55 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:40.640 23:42:55 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.640 00:06:40.640 real 0m1.298s 00:06:40.640 user 0m1.196s 00:06:40.640 sys 0m0.112s 00:06:40.640 23:42:55 accel.accel_compare -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:40.640 23:42:55 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:40.640 ************************************ 00:06:40.640 END TEST accel_compare 00:06:40.640 ************************************ 00:06:40.640 23:42:55 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:40.640 23:42:55 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:40.640 23:42:55 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:06:40.640 23:42:55 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:40.640 23:42:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.640 ************************************ 00:06:40.640 START TEST accel_xor 00:06:40.640 ************************************ 00:06:40.640 23:42:55 accel.accel_xor -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w xor -y 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:40.640 [2024-07-15 23:42:55.516225] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:06:40.640 [2024-07-15 23:42:55.516291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid249093 ] 00:06:40.640 [2024-07-15 23:42:55.582636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.640 [2024-07-15 23:42:55.646940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:40.640 23:42:55 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.640 23:42:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.583 23:42:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.583 23:42:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.583 23:42:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.583 23:42:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.583 23:42:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.583 23:42:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.583 23:42:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.583 23:42:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.583 23:42:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.583 23:42:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.583 23:42:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.844 23:42:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.844 23:42:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.844 23:42:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.844 23:42:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.844 23:42:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.844 23:42:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.844 23:42:56 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:06:41.844 23:42:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.844 23:42:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.844 23:42:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.844 23:42:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.845 23:42:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.845 23:42:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.845 23:42:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.845 23:42:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:41.845 23:42:56 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.845 00:06:41.845 real 0m1.289s 00:06:41.845 user 0m1.202s 00:06:41.845 sys 0m0.098s 00:06:41.845 23:42:56 accel.accel_xor -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:41.845 23:42:56 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:41.845 ************************************ 00:06:41.845 END TEST accel_xor 00:06:41.845 ************************************ 00:06:41.845 23:42:56 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:41.845 23:42:56 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:41.845 23:42:56 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:06:41.845 23:42:56 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:41.845 23:42:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.845 ************************************ 00:06:41.845 START TEST accel_xor 00:06:41.845 ************************************ 00:06:41.845 23:42:56 accel.accel_xor -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w xor -y -x 3 00:06:41.845 23:42:56 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:41.845 23:42:56 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:41.845 23:42:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.845 23:42:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.845 23:42:56 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:41.845 23:42:56 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:41.845 23:42:56 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:41.845 23:42:56 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.845 23:42:56 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.845 23:42:56 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.845 23:42:56 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.845 23:42:56 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.845 23:42:56 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:41.845 23:42:56 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:41.845 [2024-07-15 23:42:56.878907] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:06:41.845 [2024-07-15 23:42:56.878998] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid249440 ] 00:06:41.845 [2024-07-15 23:42:56.946156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.845 [2024-07-15 23:42:57.009295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:42.107 23:42:57 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.107 23:42:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:43.051 23:42:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.051 00:06:43.051 real 0m1.288s 00:06:43.051 user 0m1.198s 00:06:43.051 sys 0m0.101s 00:06:43.051 23:42:58 accel.accel_xor -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:43.051 23:42:58 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:43.051 ************************************ 00:06:43.051 END TEST accel_xor 00:06:43.051 ************************************ 00:06:43.051 23:42:58 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:43.051 23:42:58 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:43.051 23:42:58 accel -- common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']' 00:06:43.051 23:42:58 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:43.051 23:42:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.051 ************************************ 00:06:43.051 START TEST accel_dif_verify 00:06:43.051 ************************************ 00:06:43.051 23:42:58 accel.accel_dif_verify -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w dif_verify 00:06:43.051 23:42:58 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:43.051 23:42:58 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:43.051 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.052 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.052 23:42:58 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:43.052 23:42:58 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:43.052 23:42:58 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:43.052 23:42:58 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.052 23:42:58 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.052 23:42:58 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.052 23:42:58 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.052 23:42:58 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.052 23:42:58 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:43.052 23:42:58 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:43.313 [2024-07-15 23:42:58.242085] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:06:43.313 [2024-07-15 23:42:58.242176] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid249789 ] 00:06:43.313 [2024-07-15 23:42:58.312177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.313 [2024-07-15 23:42:58.381561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.313 23:42:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.313 23:42:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.314 23:42:58 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # read -r var val 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.314 23:42:58 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.699 23:42:59 accel.accel_dif_verify -- 
accel/accel.sh@21 -- # case "$var" in 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:44.699 23:42:59 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.699 00:06:44.699 real 0m1.299s 00:06:44.699 user 0m1.200s 00:06:44.699 sys 0m0.111s 00:06:44.699 23:42:59 accel.accel_dif_verify -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:44.699 23:42:59 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:44.699 ************************************ 00:06:44.699 END TEST accel_dif_verify 00:06:44.699 ************************************ 00:06:44.699 23:42:59 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:44.699 23:42:59 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:44.699 23:42:59 accel -- common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']' 00:06:44.699 23:42:59 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:44.699 23:42:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.699 ************************************ 00:06:44.699 START TEST accel_dif_generate 00:06:44.699 ************************************ 00:06:44.699 23:42:59 accel.accel_dif_generate -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w dif_generate 00:06:44.699 23:42:59 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:44.699 23:42:59 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 
-- # read -r var val 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:44.700 [2024-07-15 23:42:59.613676] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:44.700 [2024-07-15 23:42:59.613767] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid249998 ] 00:06:44.700 [2024-07-15 23:42:59.684015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.700 [2024-07-15 23:42:59.754416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@23 
-- # accel_opc=dif_generate 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:44.700 23:42:59 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:44.700 23:42:59 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.085 23:43:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.085 23:43:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.085 23:43:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.085 23:43:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.085 23:43:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.085 23:43:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.086 23:43:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.086 23:43:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.086 23:43:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.086 23:43:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.086 23:43:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.086 23:43:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.086 23:43:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.086 23:43:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.086 23:43:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.086 23:43:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.086 23:43:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.086 23:43:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.086 23:43:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.086 23:43:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.086 23:43:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.086 23:43:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.086 23:43:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.086 23:43:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.086 23:43:00 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.086 23:43:00 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:46.086 23:43:00 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.086 00:06:46.086 real 0m1.300s 00:06:46.086 user 0m1.202s 00:06:46.086 sys 0m0.110s 00:06:46.086 
23:43:00 accel.accel_dif_generate -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:46.086 23:43:00 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:46.086 ************************************ 00:06:46.086 END TEST accel_dif_generate 00:06:46.086 ************************************ 00:06:46.086 23:43:00 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:46.086 23:43:00 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:46.086 23:43:00 accel -- common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']' 00:06:46.086 23:43:00 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:46.086 23:43:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.086 ************************************ 00:06:46.086 START TEST accel_dif_generate_copy 00:06:46.086 ************************************ 00:06:46.086 23:43:00 accel.accel_dif_generate_copy -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w dif_generate_copy 00:06:46.086 23:43:00 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:46.086 23:43:00 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:46.086 23:43:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.086 23:43:00 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.086 23:43:00 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:46.086 23:43:00 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:46.086 23:43:00 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:46.086 23:43:00 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.086 23:43:00 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.086 23:43:00 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.086 23:43:00 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.086 23:43:00 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.086 23:43:00 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:46.086 23:43:00 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:46.086 [2024-07-15 23:43:00.988508] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:06:46.086 [2024-07-15 23:43:00.988588] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid250198 ] 00:06:46.086 [2024-07-15 23:43:01.059637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.086 [2024-07-15 23:43:01.130084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val= 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:46.086 23:43:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.472 00:06:47.472 real 0m1.300s 00:06:47.472 user 0m1.197s 00:06:47.472 sys 0m0.113s 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:47.472 23:43:02 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:47.472 ************************************ 00:06:47.472 END TEST accel_dif_generate_copy 00:06:47.472 ************************************ 00:06:47.472 23:43:02 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:47.472 23:43:02 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:47.472 23:43:02 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.472 23:43:02 accel -- common/autotest_common.sh@1093 -- # '[' 8 -le 1 ']' 00:06:47.472 23:43:02 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:47.472 23:43:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.472 ************************************ 00:06:47.472 START TEST accel_comp 00:06:47.472 ************************************ 00:06:47.472 23:43:02 accel.accel_comp -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:47.472 
23:43:02 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:47.472 [2024-07-15 23:43:02.362931] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:47.472 [2024-07-15 23:43:02.363026] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid250529 ] 00:06:47.472 [2024-07-15 23:43:02.437225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.472 [2024-07-15 23:43:02.500793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 
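The trace above captures how each of these accel cases is driven: run_test hands accel_test a workload spec ('-t 1 -w compress -l .../spdk/test/accel/bib' for accel_comp), accel_test assembles an accel JSON config with build_accel_config (no module entries here) and passes it to the accel_perf example via /dev/fd/62 (the -c argument in the invocation above), and accel_perf brings up a single-core SPDK app (EAL '-c 0x1', one reactor on core 0) for a one-second run. A minimal stand-alone sketch of the same software-path compress run, assuming the workspace layout shown in the log and assuming the -c config redirection used by the harness can be dropped when no hardware accel module is wanted; SPDK and BIB are just shorthand variables introduced here:

    # one-second software-path compress of the bib test file, mirroring the
    # accel_perf invocation captured in the trace above (paths as in the log)
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BIB=$SPDK/test/accel/bib
    "$SPDK/build/examples/accel_perf" -t 1 -w compress -l "$BIB"

The 'val=...' / 'case "$var"' entries that follow appear to be accel_test walking key/value pairs (opcode, data size, module, run time) so that the closing checks ('[[ -n software ]]', '[[ software == \s\o\f\t\w\a\r\e ]]') can assert the software module handled the expected opcode.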
00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:47.472 23:43:02 
accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:47.472 23:43:02 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:48.854 23:43:03 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.854 00:06:48.854 real 0m1.299s 00:06:48.854 user 0m1.201s 00:06:48.854 sys 0m0.111s 00:06:48.854 23:43:03 accel.accel_comp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:48.854 23:43:03 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:48.854 ************************************ 00:06:48.854 END TEST accel_comp 00:06:48.854 ************************************ 00:06:48.854 23:43:03 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:48.854 23:43:03 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:48.854 23:43:03 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:06:48.854 23:43:03 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:48.854 23:43:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.854 ************************************ 00:06:48.854 START 
TEST accel_decomp 00:06:48.854 ************************************ 00:06:48.854 23:43:03 accel.accel_decomp -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:48.854 [2024-07-15 23:43:03.735694] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:48.854 [2024-07-15 23:43:03.735789] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid250884 ] 00:06:48.854 [2024-07-15 23:43:03.803135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.854 [2024-07-15 23:43:03.866637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_decomp -- 
accel/accel.sh@20 -- # val= 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.854 23:43:03 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:48.854 23:43:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:50.238 23:43:04 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.238 00:06:50.238 real 0m1.292s 00:06:50.238 user 0m1.205s 00:06:50.238 sys 0m0.099s 00:06:50.238 23:43:04 accel.accel_decomp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:50.238 23:43:04 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:50.238 ************************************ 00:06:50.238 END TEST accel_decomp 00:06:50.238 
************************************ 00:06:50.238 23:43:05 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:50.238 23:43:05 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:50.238 23:43:05 accel -- common/autotest_common.sh@1093 -- # '[' 11 -le 1 ']' 00:06:50.238 23:43:05 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:50.238 23:43:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.238 ************************************ 00:06:50.238 START TEST accel_decomp_full 00:06:50.238 ************************************ 00:06:50.238 23:43:05 accel.accel_decomp_full -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:50.238 [2024-07-15 23:43:05.101042] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
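accel_decomp_full repeats the decompress case with '-o 0' appended to the same command line; further down the trace the per-task size value changes from '4096 bytes' to '111250 bytes', i.e. the whole bib file appears to be submitted as one buffer rather than in 4 KiB chunks (that reading of -o 0 is inferred from the trace, not from accel_perf's documentation). Side by side, under the same path assumptions as the earlier sketch:

    # chunked software decompress, as run_test accel_decomp invokes it (with -y)
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y
    # full-buffer variant (accel_decomp_full): add -o 0; the trace then reports '111250 bytes'
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y -o 0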
00:06:50.238 [2024-07-15 23:43:05.101129] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid251234 ] 00:06:50.238 [2024-07-15 23:43:05.169322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.238 [2024-07-15 23:43:05.232560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:05 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:50.238 23:43:05 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 
00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:51.624 23:43:06 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.624 00:06:51.624 real 0m1.305s 00:06:51.624 user 0m1.209s 00:06:51.624 sys 0m0.109s 00:06:51.624 23:43:06 accel.accel_decomp_full -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:51.624 23:43:06 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:51.624 ************************************ 00:06:51.624 END TEST accel_decomp_full 00:06:51.624 ************************************ 00:06:51.624 23:43:06 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:51.624 23:43:06 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:51.624 23:43:06 accel -- common/autotest_common.sh@1093 -- # '[' 11 -le 1 ']' 00:06:51.624 23:43:06 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:51.624 23:43:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.624 ************************************ 00:06:51.624 START TEST accel_decomp_mcore 00:06:51.624 ************************************ 00:06:51.624 23:43:06 accel.accel_decomp_mcore -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:51.624 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:51.624 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:51.624 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.624 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
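accel_decomp_mcore is the same decompress run with '-m 0xf' added: the EAL parameter line that follows switches to '-c 0xf', DPDK reports four available cores, and a reactor starts on each of cores 0-3, which is why this case's summary later shows roughly four CPU-seconds of user time for a one-second wall-clock run. Sketch, same assumptions and shorthand as above:

    # four-core software decompress (accel_decomp_mcore), core mask 0xf as in the log
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y -m 0xf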
00:06:51.624 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:51.624 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:51.624 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:51.624 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.624 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.624 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.624 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.624 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.624 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:51.624 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:51.624 [2024-07-15 23:43:06.479688] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:51.624 [2024-07-15 23:43:06.479784] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid251489 ] 00:06:51.624 [2024-07-15 23:43:06.549283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.624 [2024-07-15 23:43:06.617946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.624 [2024-07-15 23:43:06.618060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.624 [2024-07-15 23:43:06.618216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.624 [2024-07-15 23:43:06.618217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.624 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:51.624 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.624 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.624 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.624 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:51.624 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.624 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.624 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.624 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:51.625 23:43:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.566 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.826 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.827 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.827 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.827 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:52.827 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:52.827 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.827 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.827 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.827 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:52.827 23:43:07 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.827 00:06:52.827 real 0m1.307s 00:06:52.827 user 0m4.440s 00:06:52.827 sys 0m0.116s 00:06:52.827 23:43:07 accel.accel_decomp_mcore -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:52.827 23:43:07 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:52.827 ************************************ 00:06:52.827 END TEST accel_decomp_mcore 00:06:52.827 ************************************ 00:06:52.827 23:43:07 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:52.827 23:43:07 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:52.827 23:43:07 accel -- common/autotest_common.sh@1093 -- # '[' 13 -le 1 ']' 00:06:52.827 23:43:07 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:52.827 23:43:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.827 ************************************ 00:06:52.827 START TEST accel_decomp_full_mcore 00:06:52.827 ************************************ 00:06:52.827 23:43:07 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:52.827 23:43:07 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:52.827 23:43:07 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:52.827 23:43:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:52.827 23:43:07 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:52.827 23:43:07 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:52.827 23:43:07 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:52.827 23:43:07 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:52.827 23:43:07 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.827 23:43:07 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.827 23:43:07 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 
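accel_decomp_full_mcore simply combines the two knobs, '-o 0' plus '-m 0xf', and again finishes with a multi-core summary (about 4.5 CPU-seconds of user time spread across the four reactors). Sketch, same assumptions as the earlier ones:

    # full-buffer decompress across four cores (accel_decomp_full_mcore)
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$BIB" -y -o 0 -m 0xf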
00:06:52.827 23:43:07 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.827 23:43:07 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.827 23:43:07 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:52.827 23:43:07 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:52.827 [2024-07-15 23:43:07.859900] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:52.827 [2024-07-15 23:43:07.859978] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid251697 ] 00:06:52.827 [2024-07-15 23:43:07.930309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:52.827 [2024-07-15 23:43:08.003918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.827 [2024-07-15 23:43:08.004038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.827 [2024-07-15 23:43:08.004199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.827 [2024-07-15 23:43:08.004199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:53.088 23:43:08 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:53.088 23:43:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.028 00:06:54.028 real 0m1.325s 00:06:54.028 user 0m4.495s 00:06:54.028 sys 0m0.118s 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:54.028 23:43:09 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:54.028 ************************************ 00:06:54.028 END TEST accel_decomp_full_mcore 00:06:54.028 ************************************ 00:06:54.028 23:43:09 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:54.028 23:43:09 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:54.028 23:43:09 accel -- common/autotest_common.sh@1093 -- # '[' 11 -le 1 ']' 00:06:54.028 23:43:09 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:54.028 23:43:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.289 ************************************ 00:06:54.289 START TEST accel_decomp_mthread 00:06:54.289 ************************************ 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 
00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:54.289 [2024-07-15 23:43:09.258949] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:54.289 [2024-07-15 23:43:09.259012] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid251976 ] 00:06:54.289 [2024-07-15 23:43:09.325536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.289 [2024-07-15 23:43:09.390736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # 
read -r var val 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:54.289 23:43:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:54.289 23:43:09 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.674 00:06:55.674 real 0m1.295s 00:06:55.674 user 0m1.193s 00:06:55.674 sys 0m0.114s 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:55.674 23:43:10 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:55.674 ************************************ 00:06:55.674 END TEST accel_decomp_mthread 00:06:55.674 ************************************ 00:06:55.674 23:43:10 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:55.674 23:43:10 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:55.674 23:43:10 accel -- common/autotest_common.sh@1093 -- # '[' 13 -le 1 ']' 
00:06:55.674 23:43:10 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:55.674 23:43:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.674 ************************************ 00:06:55.674 START TEST accel_decomp_full_mthread 00:06:55.674 ************************************ 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:55.674 [2024-07-15 23:43:10.630360] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:06:55.674 [2024-07-15 23:43:10.630472] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid252334 ] 00:06:55.674 [2024-07-15 23:43:10.702709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.674 [2024-07-15 23:43:10.768795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 
23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:55.674 23:43:10 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:55.674 23:43:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.059 00:06:57.059 real 0m1.328s 00:06:57.059 user 0m1.224s 00:06:57.059 sys 0m0.117s 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:57.059 23:43:11 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:57.059 ************************************ 00:06:57.059 END TEST accel_decomp_full_mthread 00:06:57.059 ************************************ 
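For reference, the accel_decomp_full_mthread case above drives the software decompress engine through build/examples/accel_perf with the full 111250-byte input and two worker threads. A minimal stand-alone sketch of that invocation, using only the binary path and flags recorded in the log above (dropping the "-c /dev/fd/62" JSON config pipe is an assumption that holds only when no accel module configuration is needed for the software path):

    # Flags copied from the accel_perf command line logged above:
    # -t 1 (run for 1 second), -w decompress, -l <compressed input>,
    # -y (verify), -o 0, -T 2 (two threads). Paths are the workspace
    # paths printed by this job.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/examples/accel_perf \
        -t 1 -w decompress \
        -l $SPDK/test/accel/bib \
        -y -o 0 -T 2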
00:06:57.059 23:43:11 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:57.059 23:43:11 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:57.059 23:43:11 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:57.059 23:43:11 accel -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:06:57.059 23:43:11 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:57.059 23:43:11 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:57.059 23:43:11 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.059 23:43:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.059 23:43:11 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.059 23:43:11 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.059 23:43:11 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.059 23:43:11 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.059 23:43:11 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:57.059 23:43:11 accel -- accel/accel.sh@41 -- # jq -r . 00:06:57.059 ************************************ 00:06:57.059 START TEST accel_dif_functional_tests 00:06:57.059 ************************************ 00:06:57.059 23:43:12 accel.accel_dif_functional_tests -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:57.059 [2024-07-15 23:43:12.055862] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:57.059 [2024-07-15 23:43:12.055915] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid252682 ] 00:06:57.059 [2024-07-15 23:43:12.121798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.059 [2024-07-15 23:43:12.187290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.059 [2024-07-15 23:43:12.187506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.059 [2024-07-15 23:43:12.187509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.059 00:06:57.059 00:06:57.059 CUnit - A unit testing framework for C - Version 2.1-3 00:06:57.059 http://cunit.sourceforge.net/ 00:06:57.059 00:06:57.059 00:06:57.059 Suite: accel_dif 00:06:57.059 Test: verify: DIF generated, GUARD check ...passed 00:06:57.059 Test: verify: DIF generated, APPTAG check ...passed 00:06:57.059 Test: verify: DIF generated, REFTAG check ...passed 00:06:57.059 Test: verify: DIF not generated, GUARD check ...[2024-07-15 23:43:12.242809] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:57.059 passed 00:06:57.059 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 23:43:12.242853] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:57.059 passed 00:06:57.059 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 23:43:12.242874] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:57.059 passed 00:06:57.059 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:57.059 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 23:43:12.242920] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:57.059 passed 00:06:57.059 Test: 
verify: APPTAG incorrect, no APPTAG check ...passed 00:06:57.059 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:57.059 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:57.059 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 23:43:12.243037] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:57.059 passed 00:06:57.059 Test: verify copy: DIF generated, GUARD check ...passed 00:06:57.059 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:57.059 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:57.059 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 23:43:12.243158] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:57.059 passed 00:06:57.059 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 23:43:12.243182] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:57.059 passed 00:06:57.059 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 23:43:12.243204] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:57.059 passed 00:06:57.059 Test: generate copy: DIF generated, GUARD check ...passed 00:06:57.059 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:57.059 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:57.059 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:57.059 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:57.059 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:57.059 Test: generate copy: iovecs-len validate ...[2024-07-15 23:43:12.243394] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:57.059 passed 00:06:57.059 Test: generate copy: buffer alignment validate ...passed 00:06:57.059 00:06:57.059 Run Summary: Type Total Ran Passed Failed Inactive 00:06:57.059 suites 1 1 n/a 0 0 00:06:57.059 tests 26 26 26 0 0 00:06:57.059 asserts 115 115 115 0 n/a 00:06:57.059 00:06:57.059 Elapsed time = 0.002 seconds 00:06:57.319 00:06:57.319 real 0m0.352s 00:06:57.319 user 0m0.487s 00:06:57.319 sys 0m0.129s 00:06:57.319 23:43:12 accel.accel_dif_functional_tests -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:57.319 23:43:12 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:57.319 ************************************ 00:06:57.319 END TEST accel_dif_functional_tests 00:06:57.319 ************************************ 00:06:57.319 23:43:12 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:57.319 00:06:57.319 real 0m30.253s 00:06:57.319 user 0m33.734s 00:06:57.319 sys 0m4.280s 00:06:57.319 23:43:12 accel -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:57.319 23:43:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.319 ************************************ 00:06:57.319 END TEST accel 00:06:57.319 ************************************ 00:06:57.319 23:43:12 -- common/autotest_common.sh@1136 -- # return 0 00:06:57.319 23:43:12 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:57.319 23:43:12 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:57.319 23:43:12 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:57.319 23:43:12 -- common/autotest_common.sh@10 -- # set +x 00:06:57.319 ************************************ 00:06:57.319 START TEST accel_rpc 00:06:57.319 ************************************ 00:06:57.319 23:43:12 accel_rpc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:57.580 * Looking for test storage... 00:06:57.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:57.580 23:43:12 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:57.580 23:43:12 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=252748 00:06:57.580 23:43:12 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 252748 00:06:57.580 23:43:12 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:57.580 23:43:12 accel_rpc -- common/autotest_common.sh@823 -- # '[' -z 252748 ']' 00:06:57.580 23:43:12 accel_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.580 23:43:12 accel_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:57.580 23:43:12 accel_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.580 23:43:12 accel_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:57.580 23:43:12 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.580 [2024-07-15 23:43:12.629716] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:06:57.580 [2024-07-15 23:43:12.629772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid252748 ] 00:06:57.580 [2024-07-15 23:43:12.695914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.580 [2024-07-15 23:43:12.760227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.519 23:43:13 accel_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:58.519 23:43:13 accel_rpc -- common/autotest_common.sh@856 -- # return 0 00:06:58.519 23:43:13 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:58.519 23:43:13 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:58.519 23:43:13 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:58.519 23:43:13 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:58.519 23:43:13 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:58.519 23:43:13 accel_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:58.519 23:43:13 accel_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:58.519 23:43:13 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.519 ************************************ 00:06:58.519 START TEST accel_assign_opcode 00:06:58.519 ************************************ 00:06:58.519 23:43:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1117 -- # accel_assign_opcode_test_suite 00:06:58.519 23:43:13 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:58.519 23:43:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:58.519 23:43:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:58.519 [2024-07-15 23:43:13.442201] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:58.519 23:43:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:58.519 23:43:13 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:58.519 23:43:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:58.519 23:43:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:58.519 [2024-07-15 23:43:13.450214] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:58.519 23:43:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:58.519 23:43:13 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:58.519 23:43:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:58.519 23:43:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:58.519 23:43:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:58.519 23:43:13 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:58.519 23:43:13 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:58.519 23:43:13 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:58.519 23:43:13 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:06:58.519 23:43:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:58.519 23:43:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:58.519 software 00:06:58.519 00:06:58.519 real 0m0.203s 00:06:58.519 user 0m0.048s 00:06:58.519 sys 0m0.008s 00:06:58.519 23:43:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:58.519 23:43:13 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:58.519 ************************************ 00:06:58.519 END TEST accel_assign_opcode 00:06:58.519 ************************************ 00:06:58.519 23:43:13 accel_rpc -- common/autotest_common.sh@1136 -- # return 0 00:06:58.519 23:43:13 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 252748 00:06:58.519 23:43:13 accel_rpc -- common/autotest_common.sh@942 -- # '[' -z 252748 ']' 00:06:58.519 23:43:13 accel_rpc -- common/autotest_common.sh@946 -- # kill -0 252748 00:06:58.519 23:43:13 accel_rpc -- common/autotest_common.sh@947 -- # uname 00:06:58.519 23:43:13 accel_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:58.519 23:43:13 accel_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 252748 00:06:58.778 23:43:13 accel_rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:58.779 23:43:13 accel_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:58.779 23:43:13 accel_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 252748' 00:06:58.779 killing process with pid 252748 00:06:58.779 23:43:13 accel_rpc -- common/autotest_common.sh@961 -- # kill 252748 00:06:58.779 23:43:13 accel_rpc -- common/autotest_common.sh@966 -- # wait 252748 00:06:58.779 00:06:58.779 real 0m1.463s 00:06:58.779 user 0m1.528s 00:06:58.779 sys 0m0.428s 00:06:58.779 23:43:13 accel_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:58.779 23:43:13 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.779 ************************************ 00:06:58.779 END TEST accel_rpc 00:06:58.779 ************************************ 00:06:59.039 23:43:13 -- common/autotest_common.sh@1136 -- # return 0 00:06:59.039 23:43:13 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:59.039 23:43:13 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:59.039 23:43:13 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:59.039 23:43:13 -- common/autotest_common.sh@10 -- # set +x 00:06:59.039 ************************************ 00:06:59.039 START TEST app_cmdline 00:06:59.039 ************************************ 00:06:59.039 23:43:14 app_cmdline -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:59.039 * Looking for test storage... 
00:06:59.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:59.039 23:43:14 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:59.039 23:43:14 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=253158 00:06:59.039 23:43:14 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 253158 00:06:59.039 23:43:14 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:59.039 23:43:14 app_cmdline -- common/autotest_common.sh@823 -- # '[' -z 253158 ']' 00:06:59.039 23:43:14 app_cmdline -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.039 23:43:14 app_cmdline -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:59.039 23:43:14 app_cmdline -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.039 23:43:14 app_cmdline -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:59.039 23:43:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.039 [2024-07-15 23:43:14.172743] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:06:59.039 [2024-07-15 23:43:14.172799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid253158 ] 00:06:59.300 [2024-07-15 23:43:14.242408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.300 [2024-07-15 23:43:14.311572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.870 23:43:14 app_cmdline -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:59.870 23:43:14 app_cmdline -- common/autotest_common.sh@856 -- # return 0 00:06:59.870 23:43:14 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:00.130 { 00:07:00.130 "version": "SPDK v24.09-pre git sha1 a83ad116a", 00:07:00.130 "fields": { 00:07:00.130 "major": 24, 00:07:00.130 "minor": 9, 00:07:00.130 "patch": 0, 00:07:00.130 "suffix": "-pre", 00:07:00.130 "commit": "a83ad116a" 00:07:00.130 } 00:07:00.130 } 00:07:00.130 23:43:15 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:00.130 23:43:15 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:00.130 23:43:15 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:00.130 23:43:15 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:00.130 23:43:15 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:00.130 23:43:15 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:00.130 23:43:15 app_cmdline -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:00.130 23:43:15 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:00.130 23:43:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:00.130 23:43:15 app_cmdline -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:00.130 23:43:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:00.130 23:43:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ 
\s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:00.130 23:43:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.130 23:43:15 app_cmdline -- common/autotest_common.sh@642 -- # local es=0 00:07:00.130 23:43:15 app_cmdline -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.130 23:43:15 app_cmdline -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.130 23:43:15 app_cmdline -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:07:00.130 23:43:15 app_cmdline -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.130 23:43:15 app_cmdline -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:07:00.130 23:43:15 app_cmdline -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.130 23:43:15 app_cmdline -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:07:00.130 23:43:15 app_cmdline -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.130 23:43:15 app_cmdline -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:00.130 23:43:15 app_cmdline -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:00.130 request: 00:07:00.130 { 00:07:00.130 "method": "env_dpdk_get_mem_stats", 00:07:00.130 "req_id": 1 00:07:00.130 } 00:07:00.130 Got JSON-RPC error response 00:07:00.130 response: 00:07:00.130 { 00:07:00.130 "code": -32601, 00:07:00.131 "message": "Method not found" 00:07:00.131 } 00:07:00.390 23:43:15 app_cmdline -- common/autotest_common.sh@645 -- # es=1 00:07:00.390 23:43:15 app_cmdline -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:07:00.390 23:43:15 app_cmdline -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:07:00.390 23:43:15 app_cmdline -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:07:00.390 23:43:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 253158 00:07:00.390 23:43:15 app_cmdline -- common/autotest_common.sh@942 -- # '[' -z 253158 ']' 00:07:00.390 23:43:15 app_cmdline -- common/autotest_common.sh@946 -- # kill -0 253158 00:07:00.390 23:43:15 app_cmdline -- common/autotest_common.sh@947 -- # uname 00:07:00.390 23:43:15 app_cmdline -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:07:00.390 23:43:15 app_cmdline -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 253158 00:07:00.390 23:43:15 app_cmdline -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:07:00.390 23:43:15 app_cmdline -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:07:00.390 23:43:15 app_cmdline -- common/autotest_common.sh@960 -- # echo 'killing process with pid 253158' 00:07:00.390 killing process with pid 253158 00:07:00.390 23:43:15 app_cmdline -- common/autotest_common.sh@961 -- # kill 253158 00:07:00.390 23:43:15 app_cmdline -- common/autotest_common.sh@966 -- # wait 253158 00:07:00.650 00:07:00.650 real 0m1.580s 00:07:00.650 user 0m1.897s 00:07:00.650 sys 0m0.411s 00:07:00.650 23:43:15 app_cmdline -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:00.650 23:43:15 app_cmdline -- common/autotest_common.sh@10 -- 
# set +x 00:07:00.650 ************************************ 00:07:00.650 END TEST app_cmdline 00:07:00.650 ************************************ 00:07:00.650 23:43:15 -- common/autotest_common.sh@1136 -- # return 0 00:07:00.650 23:43:15 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:00.650 23:43:15 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:07:00.650 23:43:15 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:00.650 23:43:15 -- common/autotest_common.sh@10 -- # set +x 00:07:00.650 ************************************ 00:07:00.650 START TEST version 00:07:00.650 ************************************ 00:07:00.650 23:43:15 version -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:00.650 * Looking for test storage... 00:07:00.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:00.650 23:43:15 version -- app/version.sh@17 -- # get_header_version major 00:07:00.650 23:43:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:00.650 23:43:15 version -- app/version.sh@14 -- # cut -f2 00:07:00.650 23:43:15 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.650 23:43:15 version -- app/version.sh@17 -- # major=24 00:07:00.650 23:43:15 version -- app/version.sh@18 -- # get_header_version minor 00:07:00.650 23:43:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:00.650 23:43:15 version -- app/version.sh@14 -- # cut -f2 00:07:00.650 23:43:15 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.650 23:43:15 version -- app/version.sh@18 -- # minor=9 00:07:00.650 23:43:15 version -- app/version.sh@19 -- # get_header_version patch 00:07:00.650 23:43:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:00.650 23:43:15 version -- app/version.sh@14 -- # cut -f2 00:07:00.650 23:43:15 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.650 23:43:15 version -- app/version.sh@19 -- # patch=0 00:07:00.650 23:43:15 version -- app/version.sh@20 -- # get_header_version suffix 00:07:00.650 23:43:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:00.650 23:43:15 version -- app/version.sh@14 -- # cut -f2 00:07:00.650 23:43:15 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.651 23:43:15 version -- app/version.sh@20 -- # suffix=-pre 00:07:00.651 23:43:15 version -- app/version.sh@22 -- # version=24.9 00:07:00.651 23:43:15 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:00.651 23:43:15 version -- app/version.sh@28 -- # version=24.9rc0 00:07:00.651 23:43:15 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:00.651 23:43:15 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:00.912 23:43:15 version -- app/version.sh@30 -- # 
py_version=24.9rc0 00:07:00.912 23:43:15 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:00.912 00:07:00.912 real 0m0.179s 00:07:00.912 user 0m0.094s 00:07:00.912 sys 0m0.124s 00:07:00.912 23:43:15 version -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:00.912 23:43:15 version -- common/autotest_common.sh@10 -- # set +x 00:07:00.912 ************************************ 00:07:00.912 END TEST version 00:07:00.912 ************************************ 00:07:00.912 23:43:15 -- common/autotest_common.sh@1136 -- # return 0 00:07:00.912 23:43:15 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:00.912 23:43:15 -- spdk/autotest.sh@198 -- # uname -s 00:07:00.912 23:43:15 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:00.912 23:43:15 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:00.912 23:43:15 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:00.912 23:43:15 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:00.912 23:43:15 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:00.912 23:43:15 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:00.912 23:43:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:00.912 23:43:15 -- common/autotest_common.sh@10 -- # set +x 00:07:00.912 23:43:15 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:00.912 23:43:15 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:00.912 23:43:15 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:00.912 23:43:15 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:00.912 23:43:15 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:00.912 23:43:15 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:00.912 23:43:15 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:00.912 23:43:15 -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:07:00.912 23:43:15 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:00.912 23:43:15 -- common/autotest_common.sh@10 -- # set +x 00:07:00.912 ************************************ 00:07:00.912 START TEST nvmf_tcp 00:07:00.912 ************************************ 00:07:00.912 23:43:15 nvmf_tcp -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:00.912 * Looking for test storage... 00:07:00.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:00.912 23:43:16 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.912 23:43:16 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.912 23:43:16 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.912 23:43:16 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.912 23:43:16 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.912 23:43:16 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.912 23:43:16 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:00.912 23:43:16 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.912 23:43:16 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.173 23:43:16 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.173 23:43:16 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:01.173 23:43:16 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:01.173 23:43:16 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:01.173 23:43:16 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:01.173 23:43:16 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:01.173 23:43:16 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:01.173 23:43:16 nvmf_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:01.173 23:43:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:01.173 23:43:16 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:01.173 23:43:16 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:01.173 23:43:16 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:07:01.173 23:43:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:01.173 23:43:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:01.173 ************************************ 00:07:01.173 START TEST nvmf_example 00:07:01.173 ************************************ 00:07:01.173 23:43:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:01.173 * Looking for test storage... 
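Each suite above is dispatched through the run_test helper in autotest_common.sh, which is what produces the START TEST / END TEST banners, the bash time summary (real/user/sys) and the trailing "return 0" seen after every test in this log. A minimal sketch of that wrapper pattern, inferred only from the banners and timing output visible here rather than from the helper's actual implementation:

run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"            # run the test script with its arguments
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return 0
}
# e.g. run_test_sketch version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh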
00:07:01.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.173 23:43:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.173 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:01.173 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.173 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.173 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.173 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.173 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.173 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.173 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.173 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.173 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.173 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.173 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:01.173 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:01.173 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.173 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.173 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.173 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:01.173 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.173 23:43:16 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.173 23:43:16 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.173 23:43:16 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:01.174 23:43:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.315 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:09.315 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:09.315 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:09.315 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:09.315 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:09.315 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:09.315 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:09.315 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:09.315 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:09.315 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:09.315 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:09.315 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:09.315 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:09.315 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:09.315 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:09.315 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:09.316 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:09.316 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:09.316 Found net devices under 
0000:31:00.0: cvl_0_0 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:09.316 Found net devices under 0000:31:00.1: cvl_0_1 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:09.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:09.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:07:09.316 00:07:09.316 --- 10.0.0.2 ping statistics --- 00:07:09.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.316 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:09.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:09.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.507 ms 00:07:09.316 00:07:09.316 --- 10.0.0.1 ping statistics --- 00:07:09.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.316 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=257938 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 257938 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@823 -- # '[' -z 257938 ']' 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@828 -- # local max_retries=100 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
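The nvmf_tcp_init steps above pin one port of the detected E810 pair into a private network namespace so that target and initiator can exchange NVMe/TCP traffic on a single host. Condensed from the commands in this log (the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing are specific to this runner):

ip netns add cvl_0_0_ns_spdk                                       # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP on the default port
ping -c 1 10.0.0.2                                                 # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator check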
00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # xtrace_disable 00:07:09.316 23:43:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # return 0 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:10.258 23:43:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:22.610 Initializing NVMe Controllers 00:07:22.610 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:07:22.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:22.610 Initialization complete. Launching workers. 00:07:22.610 ======================================================== 00:07:22.610 Latency(us) 00:07:22.610 Device Information : IOPS MiB/s Average min max 00:07:22.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17989.94 70.27 3557.25 862.31 15987.99 00:07:22.610 ======================================================== 00:07:22.610 Total : 17989.94 70.27 3557.25 862.31 15987.99 00:07:22.610 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:22.610 rmmod nvme_tcp 00:07:22.610 rmmod nvme_fabrics 00:07:22.610 rmmod nvme_keyring 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 257938 ']' 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 257938 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@942 -- # '[' -z 257938 ']' 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # kill -0 257938 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@947 -- # uname 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 257938 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # process_name=nvmf 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # '[' nvmf = sudo ']' 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@960 -- # echo 'killing process with pid 257938' 00:07:22.610 killing process with pid 257938 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@961 -- # kill 257938 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # wait 257938 00:07:22.610 nvmf threads initialize successfully 00:07:22.610 bdev subsystem init successfully 00:07:22.610 created a nvmf target service 00:07:22.610 create targets's poll groups done 00:07:22.610 all subsystems of target started 00:07:22.610 nvmf target is running 00:07:22.610 all subsystems of target stopped 00:07:22.610 destroy targets's poll groups done 00:07:22.610 destroyed the nvmf target service 00:07:22.610 bdev subsystem finish successfully 00:07:22.610 nvmf threads destroy successfully 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:22.610 23:43:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.892 23:43:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:22.892 23:43:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:22.892 23:43:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:22.892 23:43:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.892 00:07:22.892 real 0m21.832s 00:07:22.892 user 0m46.807s 00:07:22.892 sys 0m6.936s 00:07:22.892 23:43:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:22.892 23:43:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.892 ************************************ 00:07:22.892 END TEST nvmf_example 00:07:22.892 ************************************ 00:07:22.892 23:43:38 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:07:22.892 23:43:38 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:22.892 23:43:38 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:07:22.892 23:43:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:22.892 23:43:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:22.892 ************************************ 00:07:22.892 START TEST nvmf_filesystem 00:07:22.892 ************************************ 00:07:22.892 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:23.155 * Looking for test storage... 
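The nvmf_example run that just ended provisioned its target with a short RPC sequence and then drove it from the initiator side with spdk_nvme_perf. Collected from the rpc_cmd calls and the perf invocation above, and shown here as scripts/rpc.py invocations for readability (the test itself issues them through the rpc_cmd wrapper against the example app started inside cvl_0_0_ns_spdk):

rpc.py nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, 8 KiB in-capsule data
rpc.py bdev_malloc_create 64 512                                    # 64 MB bdev, 512 B blocks -> Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'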
00:07:23.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:23.155 23:43:38 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:23.155 23:43:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:23.156 #define SPDK_CONFIG_H 00:07:23.156 #define SPDK_CONFIG_APPS 1 00:07:23.156 #define SPDK_CONFIG_ARCH native 00:07:23.156 #undef SPDK_CONFIG_ASAN 00:07:23.156 #undef SPDK_CONFIG_AVAHI 00:07:23.156 #undef SPDK_CONFIG_CET 00:07:23.156 #define SPDK_CONFIG_COVERAGE 1 00:07:23.156 #define SPDK_CONFIG_CROSS_PREFIX 00:07:23.156 #undef SPDK_CONFIG_CRYPTO 00:07:23.156 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:23.156 #undef SPDK_CONFIG_CUSTOMOCF 00:07:23.156 #undef SPDK_CONFIG_DAOS 00:07:23.156 #define SPDK_CONFIG_DAOS_DIR 00:07:23.156 #define SPDK_CONFIG_DEBUG 1 00:07:23.156 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:23.156 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:23.156 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:23.156 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:23.156 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:23.156 #undef SPDK_CONFIG_DPDK_UADK 00:07:23.156 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:23.156 #define SPDK_CONFIG_EXAMPLES 1 00:07:23.156 #undef SPDK_CONFIG_FC 00:07:23.156 #define SPDK_CONFIG_FC_PATH 00:07:23.156 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:23.156 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:23.156 #undef SPDK_CONFIG_FUSE 00:07:23.156 #undef SPDK_CONFIG_FUZZER 00:07:23.156 #define SPDK_CONFIG_FUZZER_LIB 00:07:23.156 #undef SPDK_CONFIG_GOLANG 00:07:23.156 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:23.156 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:23.156 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:23.156 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:23.156 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:23.156 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:23.156 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:23.156 #define SPDK_CONFIG_IDXD 1 00:07:23.156 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:23.156 #undef SPDK_CONFIG_IPSEC_MB 00:07:23.156 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:23.156 #define SPDK_CONFIG_ISAL 1 00:07:23.156 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:23.156 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:23.156 #define SPDK_CONFIG_LIBDIR 00:07:23.156 #undef SPDK_CONFIG_LTO 00:07:23.156 #define SPDK_CONFIG_MAX_LCORES 128 00:07:23.156 #define SPDK_CONFIG_NVME_CUSE 1 00:07:23.156 #undef SPDK_CONFIG_OCF 00:07:23.156 #define SPDK_CONFIG_OCF_PATH 00:07:23.156 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:23.156 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:23.156 #define SPDK_CONFIG_PGO_DIR 00:07:23.156 #undef SPDK_CONFIG_PGO_USE 00:07:23.156 #define SPDK_CONFIG_PREFIX /usr/local 00:07:23.156 #undef SPDK_CONFIG_RAID5F 00:07:23.156 #undef SPDK_CONFIG_RBD 00:07:23.156 #define SPDK_CONFIG_RDMA 1 00:07:23.156 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:23.156 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:23.156 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:23.156 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:23.156 #define SPDK_CONFIG_SHARED 1 00:07:23.156 #undef SPDK_CONFIG_SMA 00:07:23.156 #define SPDK_CONFIG_TESTS 1 00:07:23.156 #undef SPDK_CONFIG_TSAN 00:07:23.156 #define SPDK_CONFIG_UBLK 1 00:07:23.156 #define SPDK_CONFIG_UBSAN 1 00:07:23.156 #undef SPDK_CONFIG_UNIT_TESTS 00:07:23.156 #undef SPDK_CONFIG_URING 00:07:23.156 #define SPDK_CONFIG_URING_PATH 00:07:23.156 #undef SPDK_CONFIG_URING_ZNS 00:07:23.156 #undef SPDK_CONFIG_USDT 00:07:23.156 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:23.156 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:23.156 #define SPDK_CONFIG_VFIO_USER 1 00:07:23.156 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:23.156 #define SPDK_CONFIG_VHOST 1 00:07:23.156 #define SPDK_CONFIG_VIRTIO 1 00:07:23.156 #undef SPDK_CONFIG_VTUNE 00:07:23.156 #define SPDK_CONFIG_VTUNE_DIR 00:07:23.156 #define SPDK_CONFIG_WERROR 1 00:07:23.156 #define SPDK_CONFIG_WPDK_DIR 00:07:23.156 #undef SPDK_CONFIG_XNVME 00:07:23.156 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:23.156 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:23.157 23:43:38 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:23.157 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@273 -- # MAKE=make 00:07:23.158 
23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@274 -- # MAKEFLAGS=-j144 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@290 -- # export HUGEMEM=4096 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@290 -- # HUGEMEM=4096 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@292 -- # NO_HUGE=() 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@293 -- # TEST_MODE= 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@294 -- # for i in "$@" 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # case "$i" in 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # TEST_TRANSPORT=tcp 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@312 -- # [[ -z 260744 ]] 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@312 -- # kill -0 260744 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1674 -- # set_test_storage 2147483648 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@322 -- # [[ -v testdir ]] 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@324 -- # local requested_size=2147483648 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@325 -- # local mount target_dir 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # local -A mounts fss sizes avails uses 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # local source fs size avail mount use 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local storage_fallback storage_candidates 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # mktemp -udt spdk.XXXXXX 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # storage_fallback=/tmp/spdk.z8rST0 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.z8rST0/tests/target /tmp/spdk.z8rST0 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@352 -- # requested_size=2214592512 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@321 -- # df -T 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@321 -- # grep -v Filesystem 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=spdk_devtmpfs 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=devtmpfs 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=67108864 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=67108864 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@357 -- # uses["$mount"]=0 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=/dev/pmem0 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=ext2 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=956157952 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=5284429824 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=4328271872 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=spdk_root 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=overlay 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=122740920320 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=129370980352 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=6630060032 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=tmpfs 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=tmpfs 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=64680779776 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=64685490176 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=4710400 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=tmpfs 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=tmpfs 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=25864253440 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=25874198528 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=9945088 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=efivarfs 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=efivarfs 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=179200 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=507904 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=324608 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ 
mount 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=tmpfs 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=tmpfs 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=64683593728 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=64685490176 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=1896448 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=tmpfs 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=tmpfs 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=12937093120 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=12937097216 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=4096 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # printf '* Looking for test storage...\n' 00:07:23.158 * Looking for test storage... 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # local target_space new_size 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # for target_dir in "${storage_candidates[@]}" 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.158 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # mount=/ 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # target_space=122740920320 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # (( target_space == 0 || target_space < requested_size )) 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # (( target_space >= requested_size )) 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # [[ overlay == tmpfs ]] 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # [[ overlay == ramfs ]] 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # [[ / == / ]] 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # new_size=8844652544 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@376 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@383 -- # return 0 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set -o errtrace 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1677 -- # shopt -s extdebug 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # true 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # xtrace_fd 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:23.159 23:43:38 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:23.159 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:23.419 23:43:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:23.419 23:43:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:23.419 23:43:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:23.419 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:23.419 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:23.419 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:23.419 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:23.419 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:23.419 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.420 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:23.420 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.420 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:23.420 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:23.420 23:43:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:23.420 23:43:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:31.555 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:31.555 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:31.555 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ 
tcp == rdma ]] 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:31.556 Found net devices under 0000:31:00.0: cvl_0_0 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:31.556 Found net devices under 0000:31:00.1: cvl_0_1 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:31.556 23:43:46 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:31.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:31.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:07:31.556 00:07:31.556 --- 10.0.0.2 ping statistics --- 00:07:31.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.556 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:31.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:31.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:07:31.556 00:07:31.556 --- 10.0.0.1 ping statistics --- 00:07:31.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.556 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:31.556 ************************************ 00:07:31.556 START TEST nvmf_filesystem_no_in_capsule 00:07:31.556 ************************************ 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1117 -- # nvmf_filesystem_part 0 00:07:31.556 23:43:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=265048 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 265048 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@823 -- # '[' -z 265048 ']' 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@828 -- # local max_retries=100 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # xtrace_disable 00:07:31.556 23:43:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.818 [2024-07-15 23:43:46.769564] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:07:31.818 [2024-07-15 23:43:46.769610] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.818 [2024-07-15 23:43:46.846502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.818 [2024-07-15 23:43:46.918321] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.818 [2024-07-15 23:43:46.918360] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.818 [2024-07-15 23:43:46.918368] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.818 [2024-07-15 23:43:46.918374] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.818 [2024-07-15 23:43:46.918380] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:31.818 [2024-07-15 23:43:46.918451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.818 [2024-07-15 23:43:46.918565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.818 [2024-07-15 23:43:46.918721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.818 [2024-07-15 23:43:46.918722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.390 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:07:32.390 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # return 0 00:07:32.390 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:32.390 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:32.390 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.650 [2024-07-15 23:43:47.588865] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.650 Malloc1 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.650 [2024-07-15 23:43:47.719767] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1372 -- # local bdev_name=Malloc1 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1373 -- # local bdev_info 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bs 00:07:32.650 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local nb 00:07:32.651 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:32.651 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:32.651 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.651 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:32.651 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # bdev_info='[ 00:07:32.651 { 00:07:32.651 "name": "Malloc1", 00:07:32.651 "aliases": [ 00:07:32.651 "05ef8201-0e3f-421d-8179-cb1fb22c11d4" 00:07:32.651 ], 00:07:32.651 "product_name": "Malloc disk", 00:07:32.651 "block_size": 512, 00:07:32.651 "num_blocks": 1048576, 00:07:32.651 "uuid": "05ef8201-0e3f-421d-8179-cb1fb22c11d4", 00:07:32.651 "assigned_rate_limits": { 00:07:32.651 "rw_ios_per_sec": 0, 00:07:32.651 "rw_mbytes_per_sec": 0, 00:07:32.651 "r_mbytes_per_sec": 0, 00:07:32.651 "w_mbytes_per_sec": 0 00:07:32.651 }, 00:07:32.651 "claimed": true, 00:07:32.651 "claim_type": "exclusive_write", 00:07:32.651 "zoned": false, 00:07:32.651 "supported_io_types": { 00:07:32.651 "read": true, 00:07:32.651 "write": true, 00:07:32.651 "unmap": true, 00:07:32.651 "flush": true, 00:07:32.651 "reset": true, 00:07:32.651 "nvme_admin": false, 00:07:32.651 "nvme_io": false, 00:07:32.651 "nvme_io_md": false, 00:07:32.651 "write_zeroes": true, 00:07:32.651 "zcopy": true, 00:07:32.651 "get_zone_info": false, 00:07:32.651 "zone_management": false, 00:07:32.651 "zone_append": false, 00:07:32.651 "compare": false, 00:07:32.651 "compare_and_write": false, 00:07:32.651 "abort": true, 00:07:32.651 "seek_hole": false, 00:07:32.651 "seek_data": false, 00:07:32.651 "copy": true, 00:07:32.651 "nvme_iov_md": false 00:07:32.651 }, 00:07:32.651 "memory_domains": [ 00:07:32.651 { 
00:07:32.651 "dma_device_id": "system", 00:07:32.651 "dma_device_type": 1 00:07:32.651 }, 00:07:32.651 { 00:07:32.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.651 "dma_device_type": 2 00:07:32.651 } 00:07:32.651 ], 00:07:32.651 "driver_specific": {} 00:07:32.651 } 00:07:32.651 ]' 00:07:32.651 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # jq '.[] .block_size' 00:07:32.651 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # bs=512 00:07:32.651 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # jq '.[] .num_blocks' 00:07:32.911 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # nb=1048576 00:07:32.911 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_size=512 00:07:32.911 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # echo 512 00:07:32.911 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:32.912 23:43:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:34.296 23:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:34.296 23:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1192 -- # local i=0 00:07:34.296 23:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:07:34.296 23:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:07:34.296 23:43:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # sleep 2 00:07:36.205 23:43:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:07:36.205 23:43:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:07:36.205 23:43:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:07:36.465 23:43:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:07:36.465 23:43:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:07:36.465 23:43:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # return 0 00:07:36.465 23:43:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:36.465 23:43:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:36.465 23:43:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:36.465 23:43:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:07:36.465 23:43:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:36.465 23:43:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:36.465 23:43:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:36.465 23:43:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:36.465 23:43:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:36.465 23:43:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:36.465 23:43:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:36.725 23:43:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:36.985 23:43:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:37.925 23:43:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:37.925 23:43:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:37.925 23:43:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:07:37.925 23:43:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:37.925 23:43:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.925 ************************************ 00:07:37.925 START TEST filesystem_ext4 00:07:37.925 ************************************ 00:07:37.925 23:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:37.925 23:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:37.925 23:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:37.925 23:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:37.925 23:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@918 -- # local fstype=ext4 00:07:37.925 23:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:07:37.925 23:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@920 -- # local i=0 00:07:37.925 23:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@921 -- # local force 00:07:37.925 23:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # '[' ext4 = ext4 ']' 00:07:37.925 23:43:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # force=-F 00:07:37.925 23:43:53 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:37.925 mke2fs 1.46.5 (30-Dec-2021) 00:07:37.925 Discarding device blocks: 0/522240 done 00:07:37.925 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:37.925 Filesystem UUID: 3384c906-183b-4d19-b18a-0f2a8e968024 00:07:37.925 Superblock backups stored on blocks: 00:07:37.925 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:37.925 00:07:37.925 Allocating group tables: 0/64 done 00:07:37.925 Writing inode tables: 0/64 done 00:07:38.865 Creating journal (8192 blocks): done 00:07:39.386 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:07:39.386 00:07:39.386 23:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # return 0 00:07:39.386 23:43:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:39.957 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:39.957 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:39.957 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:39.957 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:39.957 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:39.957 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:40.218 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 265048 00:07:40.218 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:40.218 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:40.218 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:40.218 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:40.218 00:07:40.218 real 0m2.160s 00:07:40.218 user 0m0.020s 00:07:40.218 sys 0m0.056s 00:07:40.218 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:40.218 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:40.218 ************************************ 00:07:40.218 END TEST filesystem_ext4 00:07:40.218 ************************************ 00:07:40.218 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:07:40.218 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:40.218 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:07:40.218 23:43:55 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:40.218 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.218 ************************************ 00:07:40.218 START TEST filesystem_btrfs 00:07:40.218 ************************************ 00:07:40.218 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:40.218 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:40.218 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:40.218 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:40.218 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@918 -- # local fstype=btrfs 00:07:40.218 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:07:40.218 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@920 -- # local i=0 00:07:40.218 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@921 -- # local force 00:07:40.218 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # '[' btrfs = ext4 ']' 00:07:40.218 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # force=-f 00:07:40.218 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:40.218 btrfs-progs v6.6.2 00:07:40.218 See https://btrfs.readthedocs.io for more information. 00:07:40.218 00:07:40.218 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:40.218 NOTE: several default settings have changed in version 5.15, please make sure 00:07:40.218 this does not affect your deployments: 00:07:40.218 - DUP for metadata (-m dup) 00:07:40.218 - enabled no-holes (-O no-holes) 00:07:40.218 - enabled free-space-tree (-R free-space-tree) 00:07:40.218 00:07:40.218 Label: (null) 00:07:40.218 UUID: 192cd462-9181-4ad6-ba08-eeb23e763605 00:07:40.218 Node size: 16384 00:07:40.218 Sector size: 4096 00:07:40.218 Filesystem size: 510.00MiB 00:07:40.218 Block group profiles: 00:07:40.218 Data: single 8.00MiB 00:07:40.218 Metadata: DUP 32.00MiB 00:07:40.218 System: DUP 8.00MiB 00:07:40.218 SSD detected: yes 00:07:40.218 Zoned device: no 00:07:40.218 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:40.218 Runtime features: free-space-tree 00:07:40.219 Checksum: crc32c 00:07:40.219 Number of devices: 1 00:07:40.219 Devices: 00:07:40.219 ID SIZE PATH 00:07:40.219 1 510.00MiB /dev/nvme0n1p1 00:07:40.219 00:07:40.219 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # return 0 00:07:40.219 23:43:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 265048 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:41.163 00:07:41.163 real 0m0.859s 00:07:41.163 user 0m0.013s 00:07:41.163 sys 0m0.074s 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:41.163 ************************************ 00:07:41.163 END TEST filesystem_btrfs 00:07:41.163 ************************************ 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.163 ************************************ 00:07:41.163 START TEST filesystem_xfs 00:07:41.163 ************************************ 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create xfs nvme0n1 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@918 -- # local fstype=xfs 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@920 -- # local i=0 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@921 -- # local force 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # '[' xfs = ext4 ']' 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # force=-f 00:07:41.163 23:43:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:41.163 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:41.163 = sectsz=512 attr=2, projid32bit=1 00:07:41.163 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:41.163 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:41.163 data = bsize=4096 blocks=130560, imaxpct=25 00:07:41.163 = sunit=0 swidth=0 blks 00:07:41.163 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:41.163 log =internal log bsize=4096 blocks=16384, version=2 00:07:41.163 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:41.163 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:42.105 Discarding blocks...Done. 
00:07:42.105 23:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # return 0 00:07:42.105 23:43:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:44.019 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:44.019 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:44.019 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:44.019 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:44.019 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:44.019 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:44.019 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 265048 00:07:44.019 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:44.019 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:44.019 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:44.019 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:44.019 00:07:44.019 real 0m2.976s 00:07:44.019 user 0m0.030s 00:07:44.019 sys 0m0.048s 00:07:44.019 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:44.019 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:44.019 ************************************ 00:07:44.019 END TEST filesystem_xfs 00:07:44.019 ************************************ 00:07:44.019 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:07:44.019 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:44.280 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:44.280 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:44.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:44.281 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:44.281 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1213 -- # local i=0 00:07:44.281 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:07:44.281 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:44.281 23:43:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:07:44.281 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:44.281 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1225 -- # return 0 00:07:44.281 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:44.281 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:44.281 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.281 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:44.281 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:44.281 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 265048 00:07:44.281 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@942 -- # '[' -z 265048 ']' 00:07:44.281 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # kill -0 265048 00:07:44.281 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@947 -- # uname 00:07:44.281 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:07:44.281 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 265048 00:07:44.541 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:07:44.541 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:07:44.541 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # echo 'killing process with pid 265048' 00:07:44.541 killing process with pid 265048 00:07:44.541 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@961 -- # kill 265048 00:07:44.541 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # wait 265048 00:07:44.803 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:44.803 00:07:44.803 real 0m13.044s 00:07:44.803 user 0m51.318s 00:07:44.803 sys 0m1.096s 00:07:44.803 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:44.803 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.803 ************************************ 00:07:44.803 END TEST nvmf_filesystem_no_in_capsule 00:07:44.803 ************************************ 00:07:44.803 23:43:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1136 -- # return 0 00:07:44.803 23:43:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:44.803 23:43:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 
']' 00:07:44.803 23:43:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:44.803 23:43:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:44.803 ************************************ 00:07:44.803 START TEST nvmf_filesystem_in_capsule 00:07:44.803 ************************************ 00:07:44.803 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1117 -- # nvmf_filesystem_part 4096 00:07:44.803 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:44.803 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:44.803 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:44.803 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:44.803 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.803 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=267955 00:07:44.803 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 267955 00:07:44.803 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:44.803 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@823 -- # '[' -z 267955 ']' 00:07:44.803 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.803 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@828 -- # local max_retries=100 00:07:44.803 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.803 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # xtrace_disable 00:07:44.803 23:43:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.803 [2024-07-15 23:43:59.888851] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:07:44.803 [2024-07-15 23:43:59.888895] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.803 [2024-07-15 23:43:59.961428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.064 [2024-07-15 23:44:00.028850] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.064 [2024-07-15 23:44:00.028889] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:45.064 [2024-07-15 23:44:00.028897] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:45.064 [2024-07-15 23:44:00.028904] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:45.064 [2024-07-15 23:44:00.028909] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:45.064 [2024-07-15 23:44:00.029047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.064 [2024-07-15 23:44:00.029160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.064 [2024-07-15 23:44:00.029314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.064 [2024-07-15 23:44:00.029314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.635 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:07:45.635 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # return 0 00:07:45.635 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:45.635 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:45.635 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.635 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.635 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:45.635 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:45.635 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:45.635 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.635 [2024-07-15 23:44:00.712932] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.635 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:45.635 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:45.635 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:45.635 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.635 Malloc1 00:07:45.635 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:45.635 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:45.635 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:45.635 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.635 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:45.635 23:44:00 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:45.635 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:45.635 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.896 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:45.896 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:45.896 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:45.896 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.896 [2024-07-15 23:44:00.842603] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.896 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:45.896 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:45.896 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1372 -- # local bdev_name=Malloc1 00:07:45.896 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1373 -- # local bdev_info 00:07:45.896 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bs 00:07:45.896 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local nb 00:07:45.896 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:45.896 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:45.896 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:45.896 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:45.896 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # bdev_info='[ 00:07:45.896 { 00:07:45.896 "name": "Malloc1", 00:07:45.896 "aliases": [ 00:07:45.896 "02f6d9a3-4391-48e6-b97f-b89656eab587" 00:07:45.896 ], 00:07:45.896 "product_name": "Malloc disk", 00:07:45.896 "block_size": 512, 00:07:45.896 "num_blocks": 1048576, 00:07:45.896 "uuid": "02f6d9a3-4391-48e6-b97f-b89656eab587", 00:07:45.896 "assigned_rate_limits": { 00:07:45.896 "rw_ios_per_sec": 0, 00:07:45.896 "rw_mbytes_per_sec": 0, 00:07:45.896 "r_mbytes_per_sec": 0, 00:07:45.896 "w_mbytes_per_sec": 0 00:07:45.896 }, 00:07:45.896 "claimed": true, 00:07:45.896 "claim_type": "exclusive_write", 00:07:45.896 "zoned": false, 00:07:45.896 "supported_io_types": { 00:07:45.896 "read": true, 00:07:45.896 "write": true, 00:07:45.896 "unmap": true, 00:07:45.896 "flush": true, 00:07:45.896 "reset": true, 00:07:45.896 "nvme_admin": false, 00:07:45.896 "nvme_io": false, 00:07:45.896 "nvme_io_md": false, 00:07:45.896 "write_zeroes": true, 00:07:45.896 "zcopy": true, 00:07:45.896 "get_zone_info": false, 00:07:45.896 "zone_management": false, 00:07:45.896 
"zone_append": false, 00:07:45.896 "compare": false, 00:07:45.896 "compare_and_write": false, 00:07:45.896 "abort": true, 00:07:45.896 "seek_hole": false, 00:07:45.896 "seek_data": false, 00:07:45.896 "copy": true, 00:07:45.896 "nvme_iov_md": false 00:07:45.896 }, 00:07:45.896 "memory_domains": [ 00:07:45.896 { 00:07:45.896 "dma_device_id": "system", 00:07:45.896 "dma_device_type": 1 00:07:45.896 }, 00:07:45.896 { 00:07:45.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.896 "dma_device_type": 2 00:07:45.896 } 00:07:45.896 ], 00:07:45.896 "driver_specific": {} 00:07:45.896 } 00:07:45.896 ]' 00:07:45.896 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # jq '.[] .block_size' 00:07:45.896 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # bs=512 00:07:45.896 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # jq '.[] .num_blocks' 00:07:45.896 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # nb=1048576 00:07:45.896 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_size=512 00:07:45.896 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # echo 512 00:07:45.896 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:45.896 23:44:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:47.278 23:44:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:47.278 23:44:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1192 -- # local i=0 00:07:47.278 23:44:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:07:47.278 23:44:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:07:47.278 23:44:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # sleep 2 00:07:49.822 23:44:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:07:49.822 23:44:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:07:49.822 23:44:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:07:49.822 23:44:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:07:49.822 23:44:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:07:49.822 23:44:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # return 0 00:07:49.822 23:44:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:49.822 23:44:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:49.822 23:44:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:49.822 23:44:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:49.822 23:44:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:49.822 23:44:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:49.822 23:44:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:49.822 23:44:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:49.822 23:44:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:49.822 23:44:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:49.822 23:44:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:49.822 23:44:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:50.393 23:44:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:51.358 23:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:51.358 23:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:51.358 23:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:07:51.358 23:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:51.358 23:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.358 ************************************ 00:07:51.358 START TEST filesystem_in_capsule_ext4 00:07:51.358 ************************************ 00:07:51.359 23:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:51.359 23:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:51.359 23:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:51.359 23:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:51.359 23:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@918 -- # local fstype=ext4 00:07:51.359 23:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:07:51.359 23:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@920 -- # local i=0 00:07:51.359 23:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@921 -- # local force 00:07:51.359 23:44:06 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # '[' ext4 = ext4 ']' 00:07:51.359 23:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # force=-F 00:07:51.359 23:44:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:51.359 mke2fs 1.46.5 (30-Dec-2021) 00:07:51.359 Discarding device blocks: 0/522240 done 00:07:51.359 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:51.359 Filesystem UUID: 65223a3b-cdba-4ade-a1d3-30b5c98b90b9 00:07:51.359 Superblock backups stored on blocks: 00:07:51.359 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:51.359 00:07:51.359 Allocating group tables: 0/64 done 00:07:51.359 Writing inode tables: 0/64 done 00:07:51.619 Creating journal (8192 blocks): done 00:07:52.139 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:07:52.139 00:07:52.139 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # return 0 00:07:52.139 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:52.401 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:52.401 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:52.401 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:52.401 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:52.401 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:52.401 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:52.401 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 267955 00:07:52.401 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:52.401 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:52.401 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:52.401 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:52.662 00:07:52.662 real 0m1.212s 00:07:52.662 user 0m0.022s 00:07:52.662 sys 0m0.049s 00:07:52.662 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:52.662 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:52.662 ************************************ 00:07:52.662 END TEST filesystem_in_capsule_ext4 00:07:52.662 ************************************ 
00:07:52.662 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:07:52.662 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:52.662 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:07:52.662 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:52.662 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.662 ************************************ 00:07:52.662 START TEST filesystem_in_capsule_btrfs 00:07:52.662 ************************************ 00:07:52.662 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:52.662 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:52.662 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:52.662 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:52.662 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@918 -- # local fstype=btrfs 00:07:52.662 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:07:52.662 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@920 -- # local i=0 00:07:52.662 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@921 -- # local force 00:07:52.662 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # '[' btrfs = ext4 ']' 00:07:52.662 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # force=-f 00:07:52.662 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:52.923 btrfs-progs v6.6.2 00:07:52.923 See https://btrfs.readthedocs.io for more information. 00:07:52.923 00:07:52.923 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:52.923 NOTE: several default settings have changed in version 5.15, please make sure 00:07:52.923 this does not affect your deployments: 00:07:52.923 - DUP for metadata (-m dup) 00:07:52.923 - enabled no-holes (-O no-holes) 00:07:52.923 - enabled free-space-tree (-R free-space-tree) 00:07:52.923 00:07:52.923 Label: (null) 00:07:52.923 UUID: d14519d2-f902-475b-90a0-28b5d620df8d 00:07:52.923 Node size: 16384 00:07:52.923 Sector size: 4096 00:07:52.923 Filesystem size: 510.00MiB 00:07:52.923 Block group profiles: 00:07:52.923 Data: single 8.00MiB 00:07:52.923 Metadata: DUP 32.00MiB 00:07:52.923 System: DUP 8.00MiB 00:07:52.923 SSD detected: yes 00:07:52.923 Zoned device: no 00:07:52.923 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:52.923 Runtime features: free-space-tree 00:07:52.923 Checksum: crc32c 00:07:52.923 Number of devices: 1 00:07:52.923 Devices: 00:07:52.923 ID SIZE PATH 00:07:52.923 1 510.00MiB /dev/nvme0n1p1 00:07:52.923 00:07:52.923 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # return 0 00:07:52.924 23:44:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:52.924 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:52.924 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 267955 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:53.185 00:07:53.185 real 0m0.486s 00:07:53.185 user 0m0.023s 00:07:53.185 sys 0m0.067s 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:53.185 ************************************ 00:07:53.185 END TEST filesystem_in_capsule_btrfs 00:07:53.185 ************************************ 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1136 -- # return 0 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.185 ************************************ 00:07:53.185 START TEST filesystem_in_capsule_xfs 00:07:53.185 ************************************ 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create xfs nvme0n1 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@918 -- # local fstype=xfs 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@920 -- # local i=0 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@921 -- # local force 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # '[' xfs = ext4 ']' 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # force=-f 00:07:53.185 23:44:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:53.185 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:53.185 = sectsz=512 attr=2, projid32bit=1 00:07:53.185 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:53.185 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:53.185 data = bsize=4096 blocks=130560, imaxpct=25 00:07:53.185 = sunit=0 swidth=0 blks 00:07:53.185 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:53.185 log =internal log bsize=4096 blocks=16384, version=2 00:07:53.185 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:53.185 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:54.129 Discarding blocks...Done. 
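For reference, after each mkfs call filesystem.sh performs the same smoke test that follows below for xfs (and appears above for btrfs): mount the partition, create and remove a file with syncs in between, then unmount. Condensed from the xtrace, with the pid liveness check (kill -0) and the lsblk verification steps omitted:

  mount /dev/nvme0n1p1 /mnt/device   # partition exported by the SPDK target, mounted on the initiator
  touch /mnt/device/aaa              # write a file onto the fresh filesystem
  sync                               # flush it across the NVMe/TCP connection
  rm /mnt/device/aaa
  sync
  umount /mnt/device                 # detach before the next filesystem type is exercised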
00:07:54.129 23:44:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # return 0 00:07:54.129 23:44:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:56.039 23:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:56.039 23:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:56.039 23:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:56.039 23:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:56.039 23:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:56.039 23:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:56.039 23:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 267955 00:07:56.039 23:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:56.039 23:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:56.039 23:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:56.039 23:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:56.039 00:07:56.039 real 0m2.708s 00:07:56.039 user 0m0.021s 00:07:56.039 sys 0m0.057s 00:07:56.039 23:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:56.039 23:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:56.039 ************************************ 00:07:56.039 END TEST filesystem_in_capsule_xfs 00:07:56.039 ************************************ 00:07:56.039 23:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:07:56.039 23:44:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:56.039 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:56.320 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:56.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:56.582 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:56.582 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1213 -- # local i=0 00:07:56.582 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:07:56.582 23:44:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:56.582 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:07:56.582 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:56.582 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # return 0 00:07:56.582 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:56.582 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:56.582 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.582 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:56.582 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:56.582 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 267955 00:07:56.582 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@942 -- # '[' -z 267955 ']' 00:07:56.582 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # kill -0 267955 00:07:56.582 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@947 -- # uname 00:07:56.582 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:07:56.582 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 267955 00:07:56.582 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:07:56.582 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:07:56.582 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # echo 'killing process with pid 267955' 00:07:56.582 killing process with pid 267955 00:07:56.582 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@961 -- # kill 267955 00:07:56.582 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # wait 267955 00:07:56.842 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:56.842 00:07:56.842 real 0m12.092s 00:07:56.842 user 0m47.560s 00:07:56.842 sys 0m1.103s 00:07:56.842 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:56.842 23:44:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.842 ************************************ 00:07:56.842 END TEST nvmf_filesystem_in_capsule 00:07:56.842 ************************************ 00:07:56.842 23:44:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1136 -- # return 0 00:07:56.842 23:44:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:56.842 23:44:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:07:56.842 23:44:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:56.842 23:44:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:56.842 23:44:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:56.842 23:44:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:56.842 23:44:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:56.842 rmmod nvme_tcp 00:07:56.842 rmmod nvme_fabrics 00:07:56.842 rmmod nvme_keyring 00:07:56.842 23:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:56.842 23:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:56.842 23:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:56.842 23:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:56.842 23:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:56.842 23:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:56.842 23:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:56.842 23:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:56.842 23:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:56.842 23:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.842 23:44:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:56.842 23:44:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.390 23:44:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:59.390 00:07:59.390 real 0m36.030s 00:07:59.390 user 1m41.389s 00:07:59.390 sys 0m8.518s 00:07:59.390 23:44:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:59.390 23:44:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.390 ************************************ 00:07:59.390 END TEST nvmf_filesystem 00:07:59.390 ************************************ 00:07:59.390 23:44:14 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:07:59.390 23:44:14 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:59.390 23:44:14 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:07:59.390 23:44:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:59.390 23:44:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:59.390 ************************************ 00:07:59.390 START TEST nvmf_target_discovery 00:07:59.390 ************************************ 00:07:59.390 23:44:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:59.390 * Looking for test storage... 
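For reference, the nvmftestfini teardown that just ran (and recurs at the end of every test in this log) boils down to unloading the NVMe/TCP initiator modules and dismantling the network-namespace split between target and initiator. Interface and namespace names are taken from this run; the body of _remove_spdk_ns is not shown in the log, so the netns deletion below is an assumption:

  modprobe -v -r nvme-tcp            # source of the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of the _remove_spdk_ns helper
  ip -4 addr flush cvl_0_1           # drop the 10.0.0.1/24 initiator address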
00:07:59.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.390 23:44:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.390 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:59.390 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.390 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.390 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.390 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.390 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.390 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.390 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.390 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.390 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.390 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.390 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:59.390 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:59.390 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.390 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.390 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.390 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.390 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.390 23:44:14 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.390 23:44:14 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.390 23:44:14 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.390 23:44:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:59.391 23:44:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.634 23:44:22 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:07.634 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:07.634 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:07.634 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:07.635 Found net devices under 0000:31:00.0: cvl_0_0 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:07.635 Found net devices under 0000:31:00.1: cvl_0_1 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:07.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.721 ms 00:08:07.635 00:08:07.635 --- 10.0.0.2 ping statistics --- 00:08:07.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.635 rtt min/avg/max/mdev = 0.721/0.721/0.721/0.000 ms 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:07.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:07.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:08:07.635 00:08:07.635 --- 10.0.0.1 ping statistics --- 00:08:07.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.635 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=275210 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 275210 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@823 -- # '[' -z 275210 ']' 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@828 -- # local max_retries=100 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:07.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # xtrace_disable 00:08:07.635 23:44:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:07.635 [2024-07-15 23:44:22.438933] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:08:07.635 [2024-07-15 23:44:22.438997] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.635 [2024-07-15 23:44:22.519129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.635 [2024-07-15 23:44:22.593270] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.635 [2024-07-15 23:44:22.593311] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.635 [2024-07-15 23:44:22.593319] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:07.635 [2024-07-15 23:44:22.593326] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:07.635 [2024-07-15 23:44:22.593332] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:07.635 [2024-07-15 23:44:22.593477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.635 [2024-07-15 23:44:22.593592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.635 [2024-07-15 23:44:22.593747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.635 [2024-07-15 23:44:22.593748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # return 0 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.206 [2024-07-15 23:44:23.270863] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.206 Null1 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.206 [2024-07-15 23:44:23.331179] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.206 Null2 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.206 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.207 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.207 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:08.207 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:08.207 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.207 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.207 Null3 00:08:08.207 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.466 Null4 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.466 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:08:08.466 00:08:08.466 Discovery Log Number of Records 6, Generation counter 6 00:08:08.466 =====Discovery Log Entry 0====== 00:08:08.466 trtype: tcp 00:08:08.466 adrfam: ipv4 00:08:08.466 subtype: current discovery subsystem 00:08:08.466 treq: not required 00:08:08.466 portid: 0 00:08:08.466 trsvcid: 4420 00:08:08.466 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:08.466 traddr: 10.0.0.2 00:08:08.466 eflags: explicit discovery connections, duplicate discovery information 00:08:08.466 sectype: none 00:08:08.466 =====Discovery Log Entry 1====== 00:08:08.466 trtype: tcp 00:08:08.466 adrfam: ipv4 00:08:08.466 subtype: nvme subsystem 00:08:08.466 treq: not required 00:08:08.466 portid: 0 00:08:08.466 trsvcid: 4420 00:08:08.466 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:08.466 traddr: 10.0.0.2 00:08:08.466 eflags: none 00:08:08.467 sectype: none 00:08:08.467 =====Discovery Log Entry 2====== 00:08:08.467 trtype: tcp 00:08:08.467 adrfam: ipv4 00:08:08.467 subtype: nvme subsystem 00:08:08.467 treq: not required 00:08:08.467 portid: 0 00:08:08.467 trsvcid: 4420 00:08:08.467 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:08.467 traddr: 10.0.0.2 00:08:08.467 eflags: none 00:08:08.467 sectype: none 00:08:08.467 =====Discovery Log Entry 3====== 00:08:08.467 trtype: tcp 00:08:08.467 adrfam: ipv4 00:08:08.467 subtype: nvme subsystem 00:08:08.467 treq: not required 00:08:08.467 portid: 0 00:08:08.467 trsvcid: 4420 00:08:08.467 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:08.467 traddr: 10.0.0.2 00:08:08.467 eflags: none 00:08:08.467 sectype: none 00:08:08.467 =====Discovery Log Entry 4====== 00:08:08.467 trtype: tcp 00:08:08.467 adrfam: ipv4 00:08:08.467 subtype: nvme subsystem 00:08:08.467 treq: not required 00:08:08.467 portid: 0 00:08:08.467 
trsvcid: 4420 00:08:08.467 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:08.467 traddr: 10.0.0.2 00:08:08.467 eflags: none 00:08:08.467 sectype: none 00:08:08.467 =====Discovery Log Entry 5====== 00:08:08.467 trtype: tcp 00:08:08.467 adrfam: ipv4 00:08:08.467 subtype: discovery subsystem referral 00:08:08.467 treq: not required 00:08:08.467 portid: 0 00:08:08.467 trsvcid: 4430 00:08:08.467 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:08.467 traddr: 10.0.0.2 00:08:08.467 eflags: none 00:08:08.467 sectype: none 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:08.467 Perform nvmf subsystem discovery via RPC 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.467 [ 00:08:08.467 { 00:08:08.467 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:08.467 "subtype": "Discovery", 00:08:08.467 "listen_addresses": [ 00:08:08.467 { 00:08:08.467 "trtype": "TCP", 00:08:08.467 "adrfam": "IPv4", 00:08:08.467 "traddr": "10.0.0.2", 00:08:08.467 "trsvcid": "4420" 00:08:08.467 } 00:08:08.467 ], 00:08:08.467 "allow_any_host": true, 00:08:08.467 "hosts": [] 00:08:08.467 }, 00:08:08.467 { 00:08:08.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:08.467 "subtype": "NVMe", 00:08:08.467 "listen_addresses": [ 00:08:08.467 { 00:08:08.467 "trtype": "TCP", 00:08:08.467 "adrfam": "IPv4", 00:08:08.467 "traddr": "10.0.0.2", 00:08:08.467 "trsvcid": "4420" 00:08:08.467 } 00:08:08.467 ], 00:08:08.467 "allow_any_host": true, 00:08:08.467 "hosts": [], 00:08:08.467 "serial_number": "SPDK00000000000001", 00:08:08.467 "model_number": "SPDK bdev Controller", 00:08:08.467 "max_namespaces": 32, 00:08:08.467 "min_cntlid": 1, 00:08:08.467 "max_cntlid": 65519, 00:08:08.467 "namespaces": [ 00:08:08.467 { 00:08:08.467 "nsid": 1, 00:08:08.467 "bdev_name": "Null1", 00:08:08.467 "name": "Null1", 00:08:08.467 "nguid": "2FA257EB7A1D4E0BAB01A17AEE77AE0F", 00:08:08.467 "uuid": "2fa257eb-7a1d-4e0b-ab01-a17aee77ae0f" 00:08:08.467 } 00:08:08.467 ] 00:08:08.467 }, 00:08:08.467 { 00:08:08.467 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:08.467 "subtype": "NVMe", 00:08:08.467 "listen_addresses": [ 00:08:08.467 { 00:08:08.467 "trtype": "TCP", 00:08:08.467 "adrfam": "IPv4", 00:08:08.467 "traddr": "10.0.0.2", 00:08:08.467 "trsvcid": "4420" 00:08:08.467 } 00:08:08.467 ], 00:08:08.467 "allow_any_host": true, 00:08:08.467 "hosts": [], 00:08:08.467 "serial_number": "SPDK00000000000002", 00:08:08.467 "model_number": "SPDK bdev Controller", 00:08:08.467 "max_namespaces": 32, 00:08:08.467 "min_cntlid": 1, 00:08:08.467 "max_cntlid": 65519, 00:08:08.467 "namespaces": [ 00:08:08.467 { 00:08:08.467 "nsid": 1, 00:08:08.467 "bdev_name": "Null2", 00:08:08.467 "name": "Null2", 00:08:08.467 "nguid": "4AE3F2BF4CDF458A9E71BFC92B4D679B", 00:08:08.467 "uuid": "4ae3f2bf-4cdf-458a-9e71-bfc92b4d679b" 00:08:08.467 } 00:08:08.467 ] 00:08:08.467 }, 00:08:08.467 { 00:08:08.467 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:08.467 "subtype": "NVMe", 00:08:08.467 "listen_addresses": [ 00:08:08.467 { 00:08:08.467 "trtype": "TCP", 00:08:08.467 "adrfam": "IPv4", 00:08:08.467 "traddr": "10.0.0.2", 00:08:08.467 "trsvcid": "4420" 00:08:08.467 } 00:08:08.467 ], 00:08:08.467 "allow_any_host": true, 00:08:08.467 "hosts": [], 00:08:08.467 
"serial_number": "SPDK00000000000003", 00:08:08.467 "model_number": "SPDK bdev Controller", 00:08:08.467 "max_namespaces": 32, 00:08:08.467 "min_cntlid": 1, 00:08:08.467 "max_cntlid": 65519, 00:08:08.467 "namespaces": [ 00:08:08.467 { 00:08:08.467 "nsid": 1, 00:08:08.467 "bdev_name": "Null3", 00:08:08.467 "name": "Null3", 00:08:08.467 "nguid": "EE8FF8BBED8E471D9F5AC35C75F544EB", 00:08:08.467 "uuid": "ee8ff8bb-ed8e-471d-9f5a-c35c75f544eb" 00:08:08.467 } 00:08:08.467 ] 00:08:08.467 }, 00:08:08.467 { 00:08:08.467 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:08.467 "subtype": "NVMe", 00:08:08.467 "listen_addresses": [ 00:08:08.467 { 00:08:08.467 "trtype": "TCP", 00:08:08.467 "adrfam": "IPv4", 00:08:08.467 "traddr": "10.0.0.2", 00:08:08.467 "trsvcid": "4420" 00:08:08.467 } 00:08:08.467 ], 00:08:08.467 "allow_any_host": true, 00:08:08.467 "hosts": [], 00:08:08.467 "serial_number": "SPDK00000000000004", 00:08:08.467 "model_number": "SPDK bdev Controller", 00:08:08.467 "max_namespaces": 32, 00:08:08.467 "min_cntlid": 1, 00:08:08.467 "max_cntlid": 65519, 00:08:08.467 "namespaces": [ 00:08:08.467 { 00:08:08.467 "nsid": 1, 00:08:08.467 "bdev_name": "Null4", 00:08:08.467 "name": "Null4", 00:08:08.467 "nguid": "507DBBB731574070AD615BC9BA6DABEB", 00:08:08.467 "uuid": "507dbbb7-3157-4070-ad61-5bc9ba6dabeb" 00:08:08.467 } 00:08:08.467 ] 00:08:08.467 } 00:08:08.467 ] 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.467 23:44:23 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.467 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:08.727 23:44:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:08.728 23:44:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 
-- # '[' tcp == tcp ']' 00:08:08.728 23:44:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:08.728 23:44:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:08.728 23:44:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:08.728 rmmod nvme_tcp 00:08:08.728 rmmod nvme_fabrics 00:08:08.728 rmmod nvme_keyring 00:08:08.728 23:44:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:08.728 23:44:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:08.728 23:44:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:08.728 23:44:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 275210 ']' 00:08:08.728 23:44:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 275210 00:08:08.728 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@942 -- # '[' -z 275210 ']' 00:08:08.728 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # kill -0 275210 00:08:08.728 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@947 -- # uname 00:08:08.728 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:08:08.728 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 275210 00:08:08.728 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:08:08.728 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:08:08.728 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@960 -- # echo 'killing process with pid 275210' 00:08:08.728 killing process with pid 275210 00:08:08.728 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@961 -- # kill 275210 00:08:08.728 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # wait 275210 00:08:08.988 23:44:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:08.988 23:44:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:08.988 23:44:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:08.988 23:44:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:08.988 23:44:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:08.988 23:44:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.988 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.988 23:44:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.900 23:44:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:10.900 00:08:10.900 real 0m11.918s 00:08:10.900 user 0m8.019s 00:08:10.900 sys 0m6.379s 00:08:10.900 23:44:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:10.900 23:44:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:10.900 ************************************ 00:08:10.900 END TEST nvmf_target_discovery 00:08:10.900 ************************************ 00:08:11.162 23:44:26 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:08:11.162 
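[annotation] The teardown that target/discovery.sh traced above condenses to the sketch below. This is a paraphrase of the logged commands, not the script verbatim, and it assumes rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock, as the trace suggests.

    # delete the four test subsystems and their backing null bdevs
    for i in $(seq 1 4); do
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
        rpc_cmd bdev_null_delete "Null${i}"
    done
    # drop the discovery referral registered on port 4430
    rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    # the test only passes if no bdevs remain afterwards
    check_bdevs=$(rpc_cmd bdev_get_bdevs | jq -r '.[].name')
    [ -z "$check_bdevs" ]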
23:44:26 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:11.162 23:44:26 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:08:11.162 23:44:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:11.162 23:44:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:11.162 ************************************ 00:08:11.162 START TEST nvmf_referrals 00:08:11.162 ************************************ 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:11.162 * Looking for test storage... 00:08:11.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.162 23:44:26 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:11.163 23:44:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.304 23:44:33 
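[annotation] The variables referrals.sh just exported are the whole parameter surface the test exercises; condensed from the trace above (values copied, nothing added):

    NVMF_REFERRAL_IP_1=127.0.0.2    # first referral address
    NVMF_REFERRAL_IP_2=127.0.0.3
    NVMF_REFERRAL_IP_3=127.0.0.4
    NVMF_PORT_REFERRAL=4430         # trsvcid used for every referral
    DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
    NQN=nqn.2016-06.io.spdk:cnode1  # subsystem NQN used for the -n variants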
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:19.304 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:19.304 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.304 23:44:33 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:19.304 Found net devices under 0000:31:00.0: cvl_0_0 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:19.304 Found net devices under 0000:31:00.1: cvl_0_1 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.304 23:44:33 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:19.304 23:44:33 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:19.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:08:19.304 00:08:19.304 --- 10.0.0.2 ping statistics --- 00:08:19.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.304 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:19.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:08:19.304 00:08:19.304 --- 10.0.0.1 ping statistics --- 00:08:19.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.304 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=280248 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 280248 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@823 -- # '[' -z 280248 ']' 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@828 -- # local max_retries=100 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
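[annotation] The nvmf_tcp_init sequence traced above is easier to follow condensed. A sketch using the interface names from this run (cvl_0_0 / cvl_0_1 are the two ice-driven ports the harness detected; cvl_0_0 becomes the target side, cvl_0_1 the initiator side):

    ip netns add cvl_0_0_ns_spdk                        # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns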
00:08:19.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # xtrace_disable 00:08:19.304 23:44:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.304 [2024-07-15 23:44:34.165793] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:08:19.304 [2024-07-15 23:44:34.165843] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.304 [2024-07-15 23:44:34.238683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.304 [2024-07-15 23:44:34.303945] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.304 [2024-07-15 23:44:34.303983] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.304 [2024-07-15 23:44:34.303991] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.304 [2024-07-15 23:44:34.303997] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.304 [2024-07-15 23:44:34.304003] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.304 [2024-07-15 23:44:34.304145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.304 [2024-07-15 23:44:34.304274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.304 [2024-07-15 23:44:34.304358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.304 [2024-07-15 23:44:34.304359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.874 23:44:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:08:19.874 23:44:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # return 0 00:08:19.874 23:44:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:19.874 23:44:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:19.874 23:44:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.874 23:44:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.874 23:44:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:19.874 23:44:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:19.874 23:44:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.874 [2024-07-15 23:44:34.984892] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.874 23:44:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:19.874 23:44:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:19.874 23:44:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:19.874 23:44:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.874 [2024-07-15 23:44:35.001112] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:19.874 23:44:35 
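[annotation] Once nvmf_tgt (pid 280248) is up inside the namespace and listening on /var/tmp/spdk.sock, the test issues two RPCs before touching referrals. A sketch of that step, with the options exactly as logged (assuming rpc_cmd resolves to scripts/rpc.py as in the autotest harness):

    # create the TCP transport; -u 8192 is the IO unit size, -o as passed by the script
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    # expose the discovery service on the target-namespace address
    rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery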
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:19.874 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:19.874 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:19.874 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.874 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:19.874 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:19.874 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:19.874 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.874 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:19.874 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:19.874 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:19.874 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.874 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:19.874 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:19.874 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:19.874 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:19.874 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.874 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:20.134 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:20.134 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:20.134 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:20.134 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:20.134 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:20.134 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:20.134 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:20.134 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.134 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:20.135 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:20.135 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:20.135 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:20.135 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:20.135 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:20.135 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 
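[annotation] The add-and-verify round being traced here corresponds roughly to the sequence below. NVME_HOSTNQN and NVME_HOSTID are the gen-hostnqn values sourced earlier in this log; the jq filters are the ones visible in the trace.

    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    # the target should now report exactly three referrals over RPC
    [ "$(rpc_cmd nvmf_discovery_get_referrals | jq length)" -eq 3 ]
    # and the same three addresses must show up in a discovery log page
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort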
00:08:20.135 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:20.135 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:20.135 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:20.135 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:20.135 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:20.135 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:20.135 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.395 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:20.656 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:20.656 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:20.656 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:20.656 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:20.656 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:20.656 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.656 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:20.656 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:20.656 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:20.656 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:20.656 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:20.656 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:20.656 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:20.656 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.656 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:20.656 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ 
nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:20.656 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:20.656 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:20.656 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:20.656 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.656 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:20.916 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:20.916 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:20.916 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:20.916 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.916 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:20.916 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:20.916 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:20.916 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:20.916 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:20.916 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:20.916 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:20.916 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.916 23:44:35 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:20.916 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:20.916 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:20.916 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:20.916 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:20.916 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:20.916 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.916 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:20.916 23:44:35 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:20.916 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:20.916 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:20.916 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
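[annotation] The two -n variants exercised here differ only in the subsystem NQN attached to the referral. A sketch of the check the script performs; discover_json is a shorthand introduced only for this sketch, and the jq filters are the ones shown in the trace.

    # referral that points at a specific NVMe subsystem
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    discover_json() {   # local helper for this sketch, not a function from the test
        nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
            -t tcp -a 10.0.0.2 -s 8009 -o json
    }
    # the subsystem-qualified referral shows up as an "nvme subsystem" record carrying that NQN...
    discover_json | jq -r '.records[] | select(.subtype == "nvme subsystem") | .subnqn'
    # ...while the referral added with "-n discovery" stays a discovery-subsystem-referral record
    discover_json | jq -r '.records[] | select(.subtype == "discovery subsystem referral") | .subnqn'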
get_discovery_entries 'nvme subsystem' 00:08:21.175 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:21.175 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:21.175 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:21.175 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:21.175 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:21.175 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:21.175 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:21.175 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:21.175 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:21.175 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:21.435 23:44:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:21.435 rmmod nvme_tcp 00:08:21.435 rmmod nvme_fabrics 00:08:21.435 rmmod nvme_keyring 00:08:21.695 23:44:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:21.695 23:44:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:21.695 23:44:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:21.695 23:44:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 280248 ']' 00:08:21.695 23:44:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 280248 00:08:21.695 23:44:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@942 -- # '[' -z 280248 ']' 00:08:21.695 23:44:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # kill -0 280248 00:08:21.695 23:44:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@947 -- # uname 00:08:21.695 23:44:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:08:21.695 23:44:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 280248 00:08:21.695 23:44:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:08:21.695 23:44:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:08:21.695 23:44:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@960 -- # echo 'killing process with pid 280248' 00:08:21.695 killing process with pid 280248 00:08:21.696 23:44:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@961 -- # kill 280248 00:08:21.696 23:44:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # wait 280248 00:08:21.696 23:44:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:21.696 23:44:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:21.696 23:44:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:21.696 23:44:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:21.696 23:44:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:21.696 23:44:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.696 23:44:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.696 23:44:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.270 23:44:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:24.270 00:08:24.270 real 0m12.767s 00:08:24.270 user 0m13.055s 00:08:24.270 sys 0m6.412s 00:08:24.270 23:44:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1118 -- # xtrace_disable 
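[annotation] The nvmftestfini step logged above mirrors the one at the end of the target-discovery test; condensed, with the pid and interface name from this run (a paraphrase of the trace, not the harness code itself):

    modprobe -v -r nvme-tcp        # also unloads nvme_fabrics / nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill 280248                    # the nvmf_tgt started by nvmfappstart
    ip -4 addr flush cvl_0_1       # drop the initiator-side test address; remove_spdk_ns tears down the namespace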
00:08:24.270 23:44:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.270 ************************************ 00:08:24.270 END TEST nvmf_referrals 00:08:24.270 ************************************ 00:08:24.270 23:44:38 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:08:24.270 23:44:38 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:24.270 23:44:38 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:08:24.270 23:44:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:24.270 23:44:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:24.270 ************************************ 00:08:24.270 START TEST nvmf_connect_disconnect 00:08:24.270 ************************************ 00:08:24.270 23:44:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:24.270 * Looking for test storage... 00:08:24.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:24.270 23:44:39 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:24.270 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.271 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.271 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.271 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:24.271 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:24.271 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:24.271 23:44:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.409 23:44:47 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:32.409 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:32.409 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
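For reference, gather_supported_nvmf_pci_devs classifies the NICs purely by PCI vendor:device ID (0x8086 with 0x1592/0x159b lands in the e810 list, 0x8086:0x37d2 in x722, the 0x15b3 IDs in mlx) and later resolves each matching function to its kernel netdev through /sys/bus/pci/devices/$pci/net/*. A minimal standalone sketch of the same lookup, assuming direct sysfs access outside the harness (the literal device path is the one this run reports):

  pci=0000:31:00.0                        # first port found in this run
  cat /sys/bus/pci/devices/$pci/vendor    # 0x8086 (Intel)
  cat /sys/bus/pci/devices/$pci/device    # 0x159b -> counted in the e810 list
  ls /sys/bus/pci/devices/$pci/net/       # cvl_0_0 on this rig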
00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:32.409 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:32.410 Found net devices under 0000:31:00.0: cvl_0_0 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:32.410 Found net devices under 0000:31:00.1: cvl_0_1 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:32.410 23:44:47 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:32.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:32.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:08:32.410 00:08:32.410 --- 10.0.0.2 ping statistics --- 00:08:32.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.410 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:32.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.365 ms 00:08:32.410 00:08:32.410 --- 10.0.0.1 ping statistics --- 00:08:32.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.410 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=285428 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 285428 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@823 -- # '[' -z 285428 ']' 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@828 -- # local max_retries=100 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # xtrace_disable 00:08:32.410 23:44:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.410 [2024-07-15 23:44:47.438283] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:08:32.410 [2024-07-15 23:44:47.438351] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.410 [2024-07-15 23:44:47.518859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.410 [2024-07-15 23:44:47.593818] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
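At this point nvmf_tcp_init has turned the two back-to-back E810 ports into an initiator/target pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24 (target side), cvl_0_1 keeps 10.0.0.1/24 in the root namespace (initiator side), TCP port 4420 is opened in iptables, both directions are verified with the single pings above, and nvmfappstart then launches the target inside the namespace with core mask 0xF (four reactors, matching the reactor-start notices below). A condensed sketch of that setup, using the interface names and addresses from this run; run as root:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF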
00:08:32.410 [2024-07-15 23:44:47.593857] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.410 [2024-07-15 23:44:47.593865] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.410 [2024-07-15 23:44:47.593876] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.410 [2024-07-15 23:44:47.593881] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.410 [2024-07-15 23:44:47.594016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.410 [2024-07-15 23:44:47.594135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.410 [2024-07-15 23:44:47.594292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.410 [2024-07-15 23:44:47.594292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # return 0 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.352 [2024-07-15 23:44:48.273914] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable 
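The rpc_cmd entries around this point provision the target that the connect/disconnect loop (num_iterations=5) exercises: a TCP transport, a 64 MiB / 512 B-block Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with the Malloc0 namespace, and, in the entries just below, a TCP listener on 10.0.0.2:4420; the five "disconnected 1 controller(s)" lines that follow are the visible trace of each iteration. A minimal sketch of the same RPC sequence driven directly through scripts/rpc.py, assuming the default /var/tmp/spdk.sock socket the harness waits for (the harness itself goes through its rpc_cmd wrapper and the absolute workspace path):

  rpc=./scripts/rpc.py                                 # repo-relative; assumption, not the harness wrapper
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
  $rpc bdev_malloc_create 64 512                       # returns Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420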
00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:33.352 [2024-07-15 23:44:48.333293] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:33.352 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:33.353 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:33.353 23:44:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:37.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:51.661 rmmod nvme_tcp 00:08:51.661 rmmod nvme_fabrics 00:08:51.661 rmmod nvme_keyring 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 285428 ']' 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 285428 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@942 -- # '[' -z 285428 ']' 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # kill -0 285428 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@947 -- # uname 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 285428 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # process_name=reactor_0 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # echo 'killing process with pid 285428' 00:08:51.661 killing process with pid 285428 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@961 -- # kill 285428 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # wait 285428 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:51.661 23:45:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.577 23:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:53.577 00:08:53.577 real 0m29.725s 00:08:53.577 user 1m18.149s 00:08:53.577 sys 0m7.274s 00:08:53.577 23:45:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:53.577 23:45:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:53.577 ************************************ 00:08:53.577 END TEST nvmf_connect_disconnect 00:08:53.577 ************************************ 00:08:53.577 23:45:08 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:08:53.577 23:45:08 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:53.577 23:45:08 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:08:53.577 23:45:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:53.577 23:45:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:53.839 ************************************ 00:08:53.839 START TEST nvmf_multitarget 00:08:53.839 ************************************ 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:53.839 * Looking for test storage... 
00:08:53.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:08:53.839 23:45:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.840 23:45:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.840 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:53.840 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:53.840 23:45:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:53.840 23:45:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:02.088 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:02.088 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:02.088 Found net devices under 0000:31:00.0: cvl_0_0 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.088 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:02.089 Found net devices under 0000:31:00.1: cvl_0_1 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:02.089 23:45:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:02.089 23:45:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:02.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:02.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:09:02.089 00:09:02.089 --- 10.0.0.2 ping statistics --- 00:09:02.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.089 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:09:02.089 23:45:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:02.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:02.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:09:02.089 00:09:02.089 --- 10.0.0.1 ping statistics --- 00:09:02.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.089 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:09:02.089 23:45:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.089 23:45:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:02.089 23:45:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:02.089 23:45:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.089 23:45:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:02.089 23:45:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:02.089 23:45:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.089 23:45:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:02.089 23:45:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:02.089 23:45:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:02.089 23:45:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:02.089 23:45:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:02.089 23:45:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:02.089 23:45:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=294587 00:09:02.089 23:45:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 294587 00:09:02.089 23:45:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:02.089 23:45:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@823 -- # '[' -z 294587 ']' 00:09:02.089 23:45:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.089 23:45:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@828 -- # local max_retries=100 00:09:02.089 23:45:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.089 23:45:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # xtrace_disable 00:09:02.089 23:45:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:02.089 [2024-07-15 23:45:17.104187] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:09:02.089 [2024-07-15 23:45:17.104263] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.089 [2024-07-15 23:45:17.183935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:02.089 [2024-07-15 23:45:17.259829] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.089 [2024-07-15 23:45:17.259868] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.089 [2024-07-15 23:45:17.259876] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.089 [2024-07-15 23:45:17.259882] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.089 [2024-07-15 23:45:17.259888] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.089 [2024-07-15 23:45:17.260024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.089 [2024-07-15 23:45:17.260142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.089 [2024-07-15 23:45:17.260286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.089 [2024-07-15 23:45:17.260286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.031 23:45:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:09:03.031 23:45:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # return 0 00:09:03.031 23:45:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:03.031 23:45:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:03.031 23:45:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:03.031 23:45:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.031 23:45:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:03.031 23:45:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:03.031 23:45:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:03.031 23:45:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:03.031 23:45:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:03.031 "nvmf_tgt_1" 00:09:03.031 23:45:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:03.031 "nvmf_tgt_2" 00:09:03.291 23:45:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:03.291 23:45:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:03.291 23:45:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:09:03.291 23:45:18 nvmf_tcp.nvmf_multitarget -- 
target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:03.291 true 00:09:03.291 23:45:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:03.552 true 00:09:03.552 23:45:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:03.552 23:45:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:03.552 23:45:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:03.552 23:45:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:03.552 23:45:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:09:03.552 23:45:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:03.552 23:45:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:03.552 23:45:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:03.552 23:45:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:03.552 23:45:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:03.552 23:45:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:03.552 rmmod nvme_tcp 00:09:03.552 rmmod nvme_fabrics 00:09:03.552 rmmod nvme_keyring 00:09:03.552 23:45:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:03.552 23:45:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:03.552 23:45:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:03.552 23:45:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 294587 ']' 00:09:03.552 23:45:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 294587 00:09:03.552 23:45:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@942 -- # '[' -z 294587 ']' 00:09:03.552 23:45:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # kill -0 294587 00:09:03.552 23:45:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@947 -- # uname 00:09:03.552 23:45:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:09:03.552 23:45:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 294587 00:09:03.812 23:45:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:09:03.812 23:45:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:09:03.812 23:45:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@960 -- # echo 'killing process with pid 294587' 00:09:03.812 killing process with pid 294587 00:09:03.812 23:45:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@961 -- # kill 294587 00:09:03.812 23:45:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # wait 294587 00:09:03.812 23:45:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:03.812 23:45:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:03.812 23:45:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:03.812 23:45:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
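The multitarget case above only touches target management, all through multitarget_rpc.py: it checks that exactly one target exists at start, creates nvmf_tgt_1 and nvmf_tgt_2 with -s 32, checks the count is now 3, deletes both, and checks the count is back to 1 before tearing down. A condensed sketch of that flow with the same script and arguments (repo-relative path; the trace uses the absolute workspace path, and jq supplies the counts exactly as above):

  rpc_py=./test/nvmf/target/multitarget_rpc.py
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]
  $rpc_py nvmf_delete_target -n nvmf_tgt_1
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]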
00:09:03.812 23:45:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:03.812 23:45:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.812 23:45:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:03.812 23:45:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.354 23:45:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:06.354 00:09:06.354 real 0m12.163s 00:09:06.354 user 0m9.434s 00:09:06.354 sys 0m6.519s 00:09:06.354 23:45:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1118 -- # xtrace_disable 00:09:06.354 23:45:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:06.354 ************************************ 00:09:06.354 END TEST nvmf_multitarget 00:09:06.354 ************************************ 00:09:06.354 23:45:20 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:09:06.354 23:45:20 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:06.354 23:45:20 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:09:06.354 23:45:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:09:06.354 23:45:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:06.354 ************************************ 00:09:06.354 START TEST nvmf_rpc 00:09:06.354 ************************************ 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:06.354 * Looking for test storage... 
00:09:06.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:06.354 23:45:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.491 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:14.491 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:14.492 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:14.492 Found net devices under 0000:31:00.0: cvl_0_0 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:14.492 Found net devices under 0000:31:00.1: cvl_0_1 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:14.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:14.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:09:14.492 00:09:14.492 --- 10.0.0.2 ping statistics --- 00:09:14.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.492 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:14.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:14.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:09:14.492 00:09:14.492 --- 10.0.0.1 ping statistics --- 00:09:14.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.492 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=299775 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 299775 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@823 -- # '[' -z 299775 ']' 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:09:14.492 23:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.492 [2024-07-15 23:45:29.490129] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:09:14.492 [2024-07-15 23:45:29.490184] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.492 [2024-07-15 23:45:29.564476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:14.492 [2024-07-15 23:45:29.629551] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.492 [2024-07-15 23:45:29.629589] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
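To summarize the nvmftestinit/nvmf_tcp_init sequence traced above: the two detected E810 ports are split across a network namespace so the SPDK target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) and the kernel initiator (10.0.0.1 on cvl_0_1 in the root namespace) can reach each other over TCP port 4420 on the same host. Condensed from the commands in the trace (interface names and addresses as shown there):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                   # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator reachability
    modprobe nvme-tcp                                    # kernel initiator transport
    # nvmf_tgt is then launched inside the namespace, as the trace shows:
    #   ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF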
00:09:14.492 [2024-07-15 23:45:29.629597] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.492 [2024-07-15 23:45:29.629603] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.492 [2024-07-15 23:45:29.629609] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.492 [2024-07-15 23:45:29.629750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.492 [2024-07-15 23:45:29.629869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.492 [2024-07-15 23:45:29.630023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.492 [2024-07-15 23:45:29.630025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # return 0 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:15.434 "tick_rate": 2400000000, 00:09:15.434 "poll_groups": [ 00:09:15.434 { 00:09:15.434 "name": "nvmf_tgt_poll_group_000", 00:09:15.434 "admin_qpairs": 0, 00:09:15.434 "io_qpairs": 0, 00:09:15.434 "current_admin_qpairs": 0, 00:09:15.434 "current_io_qpairs": 0, 00:09:15.434 "pending_bdev_io": 0, 00:09:15.434 "completed_nvme_io": 0, 00:09:15.434 "transports": [] 00:09:15.434 }, 00:09:15.434 { 00:09:15.434 "name": "nvmf_tgt_poll_group_001", 00:09:15.434 "admin_qpairs": 0, 00:09:15.434 "io_qpairs": 0, 00:09:15.434 "current_admin_qpairs": 0, 00:09:15.434 "current_io_qpairs": 0, 00:09:15.434 "pending_bdev_io": 0, 00:09:15.434 "completed_nvme_io": 0, 00:09:15.434 "transports": [] 00:09:15.434 }, 00:09:15.434 { 00:09:15.434 "name": "nvmf_tgt_poll_group_002", 00:09:15.434 "admin_qpairs": 0, 00:09:15.434 "io_qpairs": 0, 00:09:15.434 "current_admin_qpairs": 0, 00:09:15.434 "current_io_qpairs": 0, 00:09:15.434 "pending_bdev_io": 0, 00:09:15.434 "completed_nvme_io": 0, 00:09:15.434 "transports": [] 00:09:15.434 }, 00:09:15.434 { 00:09:15.434 "name": "nvmf_tgt_poll_group_003", 00:09:15.434 "admin_qpairs": 0, 00:09:15.434 "io_qpairs": 0, 00:09:15.434 "current_admin_qpairs": 0, 00:09:15.434 "current_io_qpairs": 0, 00:09:15.434 "pending_bdev_io": 0, 00:09:15.434 "completed_nvme_io": 0, 00:09:15.434 "transports": [] 00:09:15.434 } 00:09:15.434 ] 00:09:15.434 }' 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.434 [2024-07-15 23:45:30.427375] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:15.434 "tick_rate": 2400000000, 00:09:15.434 "poll_groups": [ 00:09:15.434 { 00:09:15.434 "name": "nvmf_tgt_poll_group_000", 00:09:15.434 "admin_qpairs": 0, 00:09:15.434 "io_qpairs": 0, 00:09:15.434 "current_admin_qpairs": 0, 00:09:15.434 "current_io_qpairs": 0, 00:09:15.434 "pending_bdev_io": 0, 00:09:15.434 "completed_nvme_io": 0, 00:09:15.434 "transports": [ 00:09:15.434 { 00:09:15.434 "trtype": "TCP" 00:09:15.434 } 00:09:15.434 ] 00:09:15.434 }, 00:09:15.434 { 00:09:15.434 "name": "nvmf_tgt_poll_group_001", 00:09:15.434 "admin_qpairs": 0, 00:09:15.434 "io_qpairs": 0, 00:09:15.434 "current_admin_qpairs": 0, 00:09:15.434 "current_io_qpairs": 0, 00:09:15.434 "pending_bdev_io": 0, 00:09:15.434 "completed_nvme_io": 0, 00:09:15.434 "transports": [ 00:09:15.434 { 00:09:15.434 "trtype": "TCP" 00:09:15.434 } 00:09:15.434 ] 00:09:15.434 }, 00:09:15.434 { 00:09:15.434 "name": "nvmf_tgt_poll_group_002", 00:09:15.434 "admin_qpairs": 0, 00:09:15.434 "io_qpairs": 0, 00:09:15.434 "current_admin_qpairs": 0, 00:09:15.434 "current_io_qpairs": 0, 00:09:15.434 "pending_bdev_io": 0, 00:09:15.434 "completed_nvme_io": 0, 00:09:15.434 "transports": [ 00:09:15.434 { 00:09:15.434 "trtype": "TCP" 00:09:15.434 } 00:09:15.434 ] 00:09:15.434 }, 00:09:15.434 { 00:09:15.434 "name": "nvmf_tgt_poll_group_003", 00:09:15.434 "admin_qpairs": 0, 00:09:15.434 "io_qpairs": 0, 00:09:15.434 "current_admin_qpairs": 0, 00:09:15.434 "current_io_qpairs": 0, 00:09:15.434 "pending_bdev_io": 0, 00:09:15.434 "completed_nvme_io": 0, 00:09:15.434 "transports": [ 00:09:15.434 { 00:09:15.434 "trtype": "TCP" 00:09:15.434 } 00:09:15.434 ] 00:09:15.434 } 00:09:15.434 ] 00:09:15.434 }' 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
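The two nvmf_get_stats snapshots above are the check target/rpc.sh performs around nvmf_create_transport: before the transport exists each poll group's "transports" array is empty, afterwards every poll group carries a TCP transport, and the admin/io qpair counters still sum to zero because no host has connected yet. A hedged equivalent of that check using SPDK's rpc.py (the test goes through its rpc_cmd wrapper; the default /var/tmp/spdk.sock RPC socket is assumed here):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # -u 8192 is the IO unit size; flags mirror the trace
    scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].transports[].trtype'    # "TCP" once per poll group
    scripts/rpc.py nvmf_get_stats | jq '[.poll_groups[].admin_qpairs] | add'   # 0 before any host connects
    scripts/rpc.py nvmf_get_stats | jq '[.poll_groups[].io_qpairs] | add'      # 0 before any host connects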
00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.434 Malloc1 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.434 [2024-07-15 23:45:30.615275] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # local es=0 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@644 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:09:15.434 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@630 
-- # local arg=nvme 00:09:15.695 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:15.695 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # type -t nvme 00:09:15.695 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:15.695 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # type -P nvme 00:09:15.695 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:15.695 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # arg=/usr/sbin/nvme 00:09:15.695 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # [[ -x /usr/sbin/nvme ]] 00:09:15.695 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@645 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:09:15.695 [2024-07-15 23:45:30.642001] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:09:15.695 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:15.695 could not add new controller: failed to write to nvme-fabrics device 00:09:15.695 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@645 -- # es=1 00:09:15.695 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:09:15.695 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:09:15.695 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:09:15.695 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:15.695 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:15.695 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.695 23:45:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:15.695 23:45:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:17.077 23:45:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:17.077 23:45:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:09:17.077 23:45:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:09:17.077 23:45:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:09:17.077 23:45:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:09:19.620 23:45:34 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:19.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # local es=0 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@644 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@630 -- # local arg=nvme 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # type -t nvme 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # type -P nvme 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # arg=/usr/sbin/nvme 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # [[ -x /usr/sbin/nvme ]] 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@645 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:19.620 [2024-07-15 23:45:34.377489] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:09:19.620 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:19.620 could not add new controller: failed to write to nvme-fabrics device 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@645 -- # es=1 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:19.620 23:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:21.004 23:45:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:21.005 23:45:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:09:21.005 23:45:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:09:21.005 23:45:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:09:21.005 23:45:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:09:22.917 23:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:09:22.917 23:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:09:22.917 23:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:09:22.917 23:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:09:22.917 23:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:09:22.917 23:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:09:22.918 23:45:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:22.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.918 23:45:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:22.918 23:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:09:22.918 23:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:09:22.918 23:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.918 23:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:09:22.918 23:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.918 23:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:09:22.918 23:45:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:22.918 23:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:22.918 23:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.918 23:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:22.918 23:45:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:22.918 23:45:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:22.918 23:45:37 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:22.918 23:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:22.918 23:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.918 23:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:22.918 23:45:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.918 23:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:22.918 23:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.918 [2024-07-15 23:45:37.989656] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.918 23:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:22.918 23:45:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:22.918 23:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:22.918 23:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.918 23:45:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:22.918 23:45:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:22.918 23:45:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:22.918 23:45:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.918 23:45:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:22.918 23:45:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:24.299 23:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:24.299 23:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:09:24.299 23:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:09:24.299 23:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:09:24.299 23:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:09:26.842 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:09:26.842 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:09:26.842 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:09:26.842 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:09:26.842 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:09:26.842 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:09:26.842 23:45:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:26.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.842 23:45:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:26.842 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:09:26.842 23:45:41 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:09:26.842 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:26.842 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:09:26.842 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:26.842 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:09:26.842 23:45:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:26.842 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:26.842 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.842 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:26.842 23:45:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:26.842 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:26.842 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.843 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:26.843 23:45:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:26.843 23:45:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:26.843 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:26.843 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.843 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:26.843 23:45:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:26.843 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:26.843 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.843 [2024-07-15 23:45:41.658453] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.843 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:26.843 23:45:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:26.843 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:26.843 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.843 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:26.843 23:45:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:26.843 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:26.843 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.843 23:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:26.843 23:45:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:28.226 23:45:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:28.226 23:45:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 
-- # local i=0 00:09:28.226 23:45:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:09:28.226 23:45:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:09:28.226 23:45:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:09:30.137 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:09:30.137 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:09:30.137 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:09:30.137 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:09:30.137 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:09:30.137 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:09:30.137 23:45:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:30.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.137 23:45:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:30.137 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:09:30.137 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:09:30.137 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.137 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:09:30.137 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.137 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:09:30.137 23:45:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:30.137 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:30.137 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.137 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:30.137 23:45:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.137 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:30.397 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.397 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:30.397 23:45:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:30.397 23:45:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.397 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:30.397 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.397 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:30.397 23:45:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.397 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:30.397 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.397 [2024-07-15 23:45:45.357496] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:30.397 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:30.397 23:45:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:30.397 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:30.397 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.397 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:30.397 23:45:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.397 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:30.397 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.397 23:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:30.397 23:45:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:31.782 23:45:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:31.782 23:45:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:09:31.782 23:45:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:09:31.782 23:45:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:09:31.782 23:45:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:09:34.325 23:45:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:09:34.325 23:45:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:09:34.325 23:45:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:09:34.325 23:45:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:09:34.325 23:45:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:09:34.326 23:45:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:09:34.326 23:45:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:34.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.326 23:45:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:34.326 23:45:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:09:34.326 23:45:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:09:34.326 23:45:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 
0 == 0 ]] 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.326 [2024-07-15 23:45:49.061226] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.326 23:45:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:35.710 23:45:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:35.710 23:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:09:35.710 23:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:09:35.710 23:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:09:35.710 23:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:09:37.621 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:09:37.621 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:09:37.621 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:09:37.621 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:09:37.621 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:09:37.621 
23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:09:37.621 23:45:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:37.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.621 23:45:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:37.621 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.622 [2024-07-15 23:45:52.730010] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.622 23:45:52 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:37.622 23:45:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:39.533 23:45:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:39.533 23:45:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:09:39.533 23:45:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:09:39.533 23:45:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:09:39.533 23:45:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:41.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.455 [2024-07-15 23:45:56.496816] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.455 [2024-07-15 23:45:56.556960] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.455 [2024-07-15 23:45:56.621146] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.455 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
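The connect in this stretch is only treated as successful once waitforserial sees the namespace from the host side; a minimal standalone sketch of that polling logic, assuming the SPDKISFASTANDAWESOME serial used above and stock lsblk/grep (the variable names here are illustrative, not the helper's own):

  # Poll until a block device exposing the expected SPDK serial appears.
  serial=SPDKISFASTANDAWESOME
  i=0
  while (( i++ <= 15 )); do
    if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )); then
      break                 # namespace enumerated, nvme connect succeeded
    fi
    sleep 2                 # give the kernel time to bring the namespace up
  done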
00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.717 [2024-07-15 23:45:56.681342] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
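Stripped of the xtrace prefixes, each of the five iterations logged in this block runs the same subsystem lifecycle; a condensed sketch of that loop, assuming a running nvmf_tgt with a Malloc1 bdev and rpc.py answering on the default /var/tmp/spdk.sock:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for i in $(seq 1 5); do
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # namespace gets nsid 1
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done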
00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.717 [2024-07-15 23:45:56.737523] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:41.717 "tick_rate": 2400000000, 00:09:41.717 "poll_groups": [ 00:09:41.717 { 00:09:41.717 "name": "nvmf_tgt_poll_group_000", 00:09:41.717 "admin_qpairs": 0, 00:09:41.717 "io_qpairs": 224, 00:09:41.717 "current_admin_qpairs": 0, 00:09:41.717 "current_io_qpairs": 0, 00:09:41.717 "pending_bdev_io": 0, 00:09:41.717 "completed_nvme_io": 518, 00:09:41.717 "transports": [ 00:09:41.717 { 00:09:41.717 "trtype": "TCP" 00:09:41.717 } 00:09:41.717 ] 00:09:41.717 }, 00:09:41.717 { 00:09:41.717 "name": "nvmf_tgt_poll_group_001", 00:09:41.717 "admin_qpairs": 1, 00:09:41.717 "io_qpairs": 223, 00:09:41.717 "current_admin_qpairs": 0, 00:09:41.717 "current_io_qpairs": 0, 00:09:41.717 "pending_bdev_io": 0, 00:09:41.717 "completed_nvme_io": 224, 00:09:41.717 "transports": [ 00:09:41.717 { 00:09:41.717 "trtype": "TCP" 00:09:41.717 } 00:09:41.717 ] 00:09:41.717 }, 00:09:41.717 { 
00:09:41.717 "name": "nvmf_tgt_poll_group_002", 00:09:41.717 "admin_qpairs": 6, 00:09:41.717 "io_qpairs": 218, 00:09:41.717 "current_admin_qpairs": 0, 00:09:41.717 "current_io_qpairs": 0, 00:09:41.717 "pending_bdev_io": 0, 00:09:41.717 "completed_nvme_io": 224, 00:09:41.717 "transports": [ 00:09:41.717 { 00:09:41.717 "trtype": "TCP" 00:09:41.717 } 00:09:41.717 ] 00:09:41.717 }, 00:09:41.717 { 00:09:41.717 "name": "nvmf_tgt_poll_group_003", 00:09:41.717 "admin_qpairs": 0, 00:09:41.717 "io_qpairs": 224, 00:09:41.717 "current_admin_qpairs": 0, 00:09:41.717 "current_io_qpairs": 0, 00:09:41.717 "pending_bdev_io": 0, 00:09:41.717 "completed_nvme_io": 273, 00:09:41.717 "transports": [ 00:09:41.717 { 00:09:41.717 "trtype": "TCP" 00:09:41.717 } 00:09:41.717 ] 00:09:41.717 } 00:09:41.717 ] 00:09:41.717 }' 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:41.717 23:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:41.987 rmmod nvme_tcp 00:09:41.987 rmmod nvme_fabrics 00:09:41.987 rmmod nvme_keyring 00:09:41.987 23:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:41.987 23:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:41.987 23:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:41.987 23:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 299775 ']' 00:09:41.987 23:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 299775 00:09:41.987 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@942 -- # '[' -z 299775 ']' 00:09:41.987 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # kill -0 299775 00:09:41.987 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@947 -- # uname 00:09:41.987 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:09:41.987 23:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 299775 00:09:41.987 23:45:57 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@948 -- # process_name=reactor_0 00:09:41.987 23:45:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:09:41.987 23:45:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 299775' 00:09:41.987 killing process with pid 299775 00:09:41.987 23:45:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@961 -- # kill 299775 00:09:41.987 23:45:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # wait 299775 00:09:41.987 23:45:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:41.987 23:45:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:41.987 23:45:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:41.987 23:45:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:41.987 23:45:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:41.987 23:45:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.987 23:45:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:41.987 23:45:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.549 23:45:59 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:44.549 00:09:44.549 real 0m38.203s 00:09:44.549 user 1m52.288s 00:09:44.549 sys 0m7.782s 00:09:44.549 23:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:09:44.549 23:45:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.549 ************************************ 00:09:44.549 END TEST nvmf_rpc 00:09:44.550 ************************************ 00:09:44.550 23:45:59 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:09:44.550 23:45:59 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:44.550 23:45:59 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:09:44.550 23:45:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:09:44.550 23:45:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:44.550 ************************************ 00:09:44.550 START TEST nvmf_invalid 00:09:44.550 ************************************ 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:44.550 * Looking for test storage... 
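Before the nvmf_invalid run gets going, note how the qpair totals checked at the end of the nvmf_rpc teardown above were computed: jsum sums a jq filter over the captured nvmf_get_stats JSON. A minimal sketch of that aggregation, assuming the stats JSON is held in $stats as in the log (the function body approximates the helper in target/rpc.sh):

  jsum() {
    local filter=$1
    # one value per poll group, summed with awk
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }
  (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))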
00:09:44.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:44.550 23:45:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:52.739 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:52.739 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:52.739 Found net devices under 0000:31:00.0: cvl_0_0 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:52.739 Found net devices under 0000:31:00.1: cvl_0_1 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:52.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:52.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:09:52.739 00:09:52.739 --- 10.0.0.2 ping statistics --- 00:09:52.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.739 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:52.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:52.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:09:52.739 00:09:52.739 --- 10.0.0.1 ping statistics --- 00:09:52.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.739 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:52.739 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:52.740 23:46:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:52.740 23:46:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:52.740 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:52.740 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=310018 00:09:52.740 23:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 310018 00:09:52.740 23:46:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@823 -- # '[' -z 310018 ']' 00:09:52.740 23:46:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.740 23:46:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@828 -- # local max_retries=100 00:09:52.740 23:46:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.740 23:46:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # xtrace_disable 00:09:52.740 23:46:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:52.740 [2024-07-15 23:46:07.572682] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
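The address assignments and the two pings above are the nvmf_tcp_init loopback setup: the target-side interface is moved into its own network namespace so the same machine can act as both initiator and target. A minimal sketch of that sequence, using the cvl_0_0/cvl_0_1 devices discovered earlier (run as root; the flush/cleanup steps are omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target NIC lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                   # host -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> host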
00:09:52.740 [2024-07-15 23:46:07.572741] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.740 [2024-07-15 23:46:07.648143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:52.740 [2024-07-15 23:46:07.714003] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:52.740 [2024-07-15 23:46:07.714037] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:52.740 [2024-07-15 23:46:07.714045] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:52.740 [2024-07-15 23:46:07.714051] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:52.740 [2024-07-15 23:46:07.714057] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:52.740 [2024-07-15 23:46:07.714189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.740 [2024-07-15 23:46:07.714313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:52.740 [2024-07-15 23:46:07.714419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.740 [2024-07-15 23:46:07.714420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.311 23:46:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:09:53.311 23:46:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # return 0 00:09:53.311 23:46:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:53.311 23:46:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:53.311 23:46:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:53.311 23:46:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.311 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:53.311 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3638 00:09:53.572 [2024-07-15 23:46:08.545242] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:53.572 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:53.572 { 00:09:53.572 "nqn": "nqn.2016-06.io.spdk:cnode3638", 00:09:53.572 "tgt_name": "foobar", 00:09:53.572 "method": "nvmf_create_subsystem", 00:09:53.572 "req_id": 1 00:09:53.572 } 00:09:53.572 Got JSON-RPC error response 00:09:53.572 response: 00:09:53.572 { 00:09:53.572 "code": -32603, 00:09:53.572 "message": "Unable to find target foobar" 00:09:53.572 }' 00:09:53.572 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:53.572 { 00:09:53.572 "nqn": "nqn.2016-06.io.spdk:cnode3638", 00:09:53.572 "tgt_name": "foobar", 00:09:53.572 "method": "nvmf_create_subsystem", 00:09:53.572 "req_id": 1 00:09:53.572 } 00:09:53.572 Got JSON-RPC error response 00:09:53.572 response: 00:09:53.572 { 00:09:53.572 "code": -32603, 00:09:53.572 "message": "Unable to find target foobar" 00:09:53.572 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 
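The first negative case hands nvmf_create_subsystem a target name that does not exist and asserts on the JSON-RPC error text; a condensed sketch of that check, capturing the error the same way the test does (the 2>&1 redirection and the || true guard are illustrative assumptions):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3638 2>&1) || true
  # expect code -32603 and the "Unable to find target" message
  [[ $out == *"Unable to find target"* ]]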
00:09:53.572 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:53.572 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode23993 00:09:53.572 [2024-07-15 23:46:08.721878] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23993: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:53.572 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:53.572 { 00:09:53.572 "nqn": "nqn.2016-06.io.spdk:cnode23993", 00:09:53.572 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:53.572 "method": "nvmf_create_subsystem", 00:09:53.572 "req_id": 1 00:09:53.572 } 00:09:53.572 Got JSON-RPC error response 00:09:53.572 response: 00:09:53.572 { 00:09:53.572 "code": -32602, 00:09:53.572 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:53.572 }' 00:09:53.572 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:53.572 { 00:09:53.572 "nqn": "nqn.2016-06.io.spdk:cnode23993", 00:09:53.572 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:53.572 "method": "nvmf_create_subsystem", 00:09:53.572 "req_id": 1 00:09:53.572 } 00:09:53.572 Got JSON-RPC error response 00:09:53.572 response: 00:09:53.572 { 00:09:53.572 "code": -32602, 00:09:53.572 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:53.572 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:53.572 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:53.572 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode26199 00:09:53.833 [2024-07-15 23:46:08.898378] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26199: invalid model number 'SPDK_Controller' 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:53.833 { 00:09:53.833 "nqn": "nqn.2016-06.io.spdk:cnode26199", 00:09:53.833 "model_number": "SPDK_Controller\u001f", 00:09:53.833 "method": "nvmf_create_subsystem", 00:09:53.833 "req_id": 1 00:09:53.833 } 00:09:53.833 Got JSON-RPC error response 00:09:53.833 response: 00:09:53.833 { 00:09:53.833 "code": -32602, 00:09:53.833 "message": "Invalid MN SPDK_Controller\u001f" 00:09:53.833 }' 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:53.833 { 00:09:53.833 "nqn": "nqn.2016-06.io.spdk:cnode26199", 00:09:53.833 "model_number": "SPDK_Controller\u001f", 00:09:53.833 "method": "nvmf_create_subsystem", 00:09:53.833 "req_id": 1 00:09:53.833 } 00:09:53.833 Got JSON-RPC error response 00:09:53.833 response: 00:09:53.833 { 00:09:53.833 "code": -32602, 00:09:53.833 "message": "Invalid MN SPDK_Controller\u001f" 00:09:53.833 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' 
'97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:53.833 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:09:53.834 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:09:53.834 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:53.834 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:09:53.834 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:53.834 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:53.834 23:46:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:53.834 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:53.834 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:53.834 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:53.834 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:53.834 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:53.834 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:53.834 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:53.834 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:53.834 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:53.834 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:53.834 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:53.834 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:53.834 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:53.834 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:53.834 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
119 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ . 
== \- ]] 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '.0v}KrNrIq\aqtXw'\''y|;u' 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '.0v}KrNrIq\aqtXw'\''y|;u' nqn.2016-06.io.spdk:cnode28000 00:09:54.095 [2024-07-15 23:46:09.235470] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28000: invalid serial number '.0v}KrNrIq\aqtXw'y|;u' 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:54.095 { 00:09:54.095 "nqn": "nqn.2016-06.io.spdk:cnode28000", 00:09:54.095 "serial_number": ".0v}KrNrIq\\aqtXw'\''y|;u", 00:09:54.095 "method": "nvmf_create_subsystem", 00:09:54.095 "req_id": 1 00:09:54.095 } 00:09:54.095 Got JSON-RPC error response 00:09:54.095 response: 00:09:54.095 { 00:09:54.095 "code": -32602, 00:09:54.095 "message": "Invalid SN .0v}KrNrIq\\aqtXw'\''y|;u" 00:09:54.095 }' 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:54.095 { 00:09:54.095 "nqn": "nqn.2016-06.io.spdk:cnode28000", 00:09:54.095 "serial_number": ".0v}KrNrIq\\aqtXw'y|;u", 00:09:54.095 "method": "nvmf_create_subsystem", 00:09:54.095 "req_id": 1 00:09:54.095 } 00:09:54.095 Got JSON-RPC error response 00:09:54.095 response: 00:09:54.095 { 00:09:54.095 "code": -32602, 00:09:54.095 "message": "Invalid SN .0v}KrNrIq\\aqtXw'y|;u" 00:09:54.095 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.095 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 44 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x46' 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
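(The run of near-identical xtrace entries above and below is a single helper at work: gen_random_s, invoked a few entries up as "gen_random_s 41" and defined in target/invalid.sh, builds a random string one character at a time. Each iteration picks a decimal code point from the chars array listed above (32..127), renders it with printf %x, decodes it with echo -e, and appends it to string; at the end a "[[ ... == \- ]]" guard checks the first character against '-'. Condensed, the traced logic is roughly the sketch below; the brace-expansion initialiser and the handling of a leading '-' are assumptions, since the trace only shows the literal array and the guard itself.)

  gen_random_s() {
      local length=$1 ll
      local chars=({32..127})      # printable ASCII code points, as in the chars=(...) array above
      local string=
      for (( ll = 0; ll < length; ll++ )); do
          local hex
          hex=$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")   # pick a code point, hex-encode it
          string+=$(echo -e "\x$hex")                            # decode the escape, append one character
      done
      if [[ ${string::1} == - ]]; then
          string=${string#-}       # assumption: drop a leading '-' so the value is not parsed as an option
      fi
      echo "$string"
  }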
00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:54.356 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
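(For reference, the serial-number case traced a little earlier against cnode28000 reduces to: capture the rpc.py error output and assert that the target rejected the value. The 41-character string being assembled here presumably feeds the analogous model-number check that falls outside the visible part of this log. A minimal sketch of the assertion pattern follows; the 21-character length and the "|| true" guard are assumptions, while the nqn and the "Invalid SN" match come verbatim from invalid.sh@54/@55 above.)

  SN=$(gen_random_s 21)    # assumed intent: one character longer than the 20-byte serial-number field
  out=$(scripts/rpc.py nvmf_create_subsystem -s "$SN" nqn.2016-06.io.spdk:cnode28000 2>&1) || true
  [[ $out == *"Invalid SN"* ]]    # the test only requires that the create was rejected with this message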
00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:09:54.357 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.617 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.617 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:09:54.617 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:09:54.617 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:09:54.617 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.617 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.617 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:09:54.617 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:54.617 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:09:54.617 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.617 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.617 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:09:54.617 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:54.617 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:09:54.617 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.617 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.617 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 
00:09:54.617 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:54.617 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:09:54.617 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:54.617 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:54.617 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ \ == \- ]] 00:09:54.617 23:46:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '\n,];G4kQ /dev/null' 00:09:56.437 23:46:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.346 23:46:13 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:58.607 00:09:58.607 real 0m14.243s 00:09:58.607 user 0m19.455s 00:09:58.607 sys 0m6.853s 00:09:58.607 23:46:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1118 -- # xtrace_disable 00:09:58.607 23:46:13 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:58.607 ************************************ 00:09:58.607 END TEST nvmf_invalid 00:09:58.607 ************************************ 00:09:58.607 23:46:13 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:09:58.607 23:46:13 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:58.607 23:46:13 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:09:58.607 23:46:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:09:58.607 23:46:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:58.607 ************************************ 00:09:58.607 START TEST nvmf_abort 00:09:58.607 ************************************ 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:58.607 * Looking for test storage... 
00:09:58.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.607 23:46:13 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:58.608 23:46:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:06.750 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:06.751 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.751 23:46:21 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:06.751 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:06.751 Found net devices under 0000:31:00.0: cvl_0_0 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:06.751 Found net devices under 0000:31:00.1: cvl_0_1 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:06.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:10:06.751 00:10:06.751 --- 10.0.0.2 ping statistics --- 00:10:06.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.751 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:06.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:10:06.751 00:10:06.751 --- 10.0.0.1 ping statistics --- 00:10:06.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.751 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=315550 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 315550 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@823 -- # '[' -z 315550 ']' 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@828 -- # local max_retries=100 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # xtrace_disable 00:10:06.751 23:46:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:07.013 [2024-07-15 23:46:21.984235] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:10:07.013 [2024-07-15 23:46:21.984299] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.013 [2024-07-15 23:46:22.079089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:07.013 [2024-07-15 23:46:22.174067] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.013 [2024-07-15 23:46:22.174123] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
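(At this point nvmf_tgt, pid 315550 above, is running inside the cvl_0_0_ns_spdk namespace that nvmf_tcp_init set up earlier: target side 10.0.0.2 on cvl_0_0, initiator side 10.0.0.1 on cvl_0_1, TCP port 4420 opened via iptables. The rpc_cmd calls traced below amount to the sequence sketched here; writing them as plain scripts/rpc.py invocations is an approximation of what the rpc_cmd wrapper does, and the abort binary path is shortened, but the verbs and arguments are taken from the trace.)

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

(The outcome is visible a few entries later: with queue depth 128 against the delay bdev, the abort example reports 33997 abort commands submitted, of which 33940 succeed, before the subsystem is deleted and the target shut down.)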
00:10:07.013 [2024-07-15 23:46:22.174132] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.013 [2024-07-15 23:46:22.174138] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.013 [2024-07-15 23:46:22.174144] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:07.013 [2024-07-15 23:46:22.174280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.013 [2024-07-15 23:46:22.174476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.013 [2024-07-15 23:46:22.174477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.584 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:10:07.584 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # return 0 00:10:07.584 23:46:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:07.584 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:07.584 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:07.845 [2024-07-15 23:46:22.801074] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:07.845 Malloc0 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:07.845 Delay0 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:07.845 23:46:22 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:07.845 [2024-07-15 23:46:22.880732] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:07.845 23:46:22 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:07.845 [2024-07-15 23:46:22.959203] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:10.386 Initializing NVMe Controllers 00:10:10.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:10.386 controller IO queue size 128 less than required 00:10:10.386 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:10.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:10.386 Initialization complete. Launching workers. 
00:10:10.386 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33936 00:10:10.386 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33997, failed to submit 62 00:10:10.386 success 33940, unsuccess 57, failed 0 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:10.386 rmmod nvme_tcp 00:10:10.386 rmmod nvme_fabrics 00:10:10.386 rmmod nvme_keyring 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 315550 ']' 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 315550 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@942 -- # '[' -z 315550 ']' 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # kill -0 315550 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@947 -- # uname 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 315550 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@960 -- # echo 'killing process with pid 315550' 00:10:10.386 killing process with pid 315550 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@961 -- # kill 315550 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # wait 315550 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:10.386 23:46:25 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.297 23:46:27 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:12.297 00:10:12.297 real 0m13.724s 00:10:12.297 user 0m13.326s 00:10:12.297 sys 0m6.874s 00:10:12.297 23:46:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1118 -- # xtrace_disable 00:10:12.297 23:46:27 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:12.297 ************************************ 00:10:12.297 END TEST nvmf_abort 00:10:12.297 ************************************ 00:10:12.297 23:46:27 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:10:12.297 23:46:27 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:12.297 23:46:27 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:10:12.297 23:46:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:10:12.297 23:46:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:12.297 ************************************ 00:10:12.297 START TEST nvmf_ns_hotplug_stress 00:10:12.297 ************************************ 00:10:12.297 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:12.559 * Looking for test storage... 00:10:12.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.559 23:46:27 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:12.559 23:46:27 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:12.559 23:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:20.704 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:20.704 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.704 23:46:35 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:20.704 Found net devices under 0000:31:00.0: cvl_0_0 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:20.704 Found net devices under 0000:31:00.1: cvl_0_1 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:20.704 23:46:35 
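The entries above show gather_supported_nvmf_pci_devs walking the PCI device table, matching the host's two Intel E810 ports (0x8086 - 0x159b at 0000:31:00.0 and 0000:31:00.1) and resolving each one to its renamed net device, cvl_0_0 and cvl_0_1. A minimal sketch of the same discovery idea, written directly against sysfs rather than the pci_bus_cache helpers in nvmf/common.sh; the id list below is a trimmed illustration, not the framework's full table:

    #!/usr/bin/env bash
    # Sketch: find net devices backed by NICs the nvmf tests support, via sysfs.
    # The 0x8086:0x159b entry mirrors the E810 ports this run detected.
    supported=("0x8086:0x159b" "0x8086:0x1592" "0x15b3:0x1017")
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor"); device=$(<"$pci/device")
        for id in "${supported[@]}"; do
            [[ "$vendor:$device" == "$id" ]] || continue
            for net in "$pci"/net/*; do
                [[ -e "$net" ]] || continue
                echo "Found ${pci##*/} ($vendor - $device): ${net##*/}"
            done
        done
    done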
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:20.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:20.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.719 ms 00:10:20.704 00:10:20.704 --- 10.0.0.2 ping statistics --- 00:10:20.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.704 rtt min/avg/max/mdev = 0.719/0.719/0.719/0.000 ms 00:10:20.704 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:20.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:20.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:10:20.705 00:10:20.705 --- 10.0.0.1 ping statistics --- 00:10:20.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.705 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:10:20.705 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.705 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:20.705 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:20.705 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.705 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:20.705 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:20.705 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.705 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:20.705 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:20.705 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:20.705 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:20.705 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:20.705 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.705 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=320922 00:10:20.705 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 320922 00:10:20.705 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:20.705 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@823 -- # '[' -z 320922 ']' 00:10:20.705 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.705 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@828 -- # local max_retries=100 00:10:20.705 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.705 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # xtrace_disable 00:10:20.705 23:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.705 [2024-07-15 23:46:35.732172] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:10:20.705 [2024-07-15 23:46:35.732246] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.705 [2024-07-15 23:46:35.827483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:20.966 [2024-07-15 23:46:35.920595] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
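With the two ports identified, nvmf_tcp_init (traced above) lays out the loopback topology used for a physical-NIC TCP run: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, TCP port 4420 is opened, and both directions are ping-checked before nvmf_tgt is started inside the namespace. Condensed from the commands this run issued (interface names and addresses are this run's values; error handling omitted):

    # Sketch of the cvl_0_0_ns_spdk topology built by nvmf_tcp_init in this run.
    TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"          # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"   # initiator stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1        # target namespace -> initiator
    # nvmf_tgt is then launched as: ip netns exec "$NS" nvmf_tgt -i 0 -e 0xFFFF -m 0xE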
00:10:20.966 [2024-07-15 23:46:35.920650] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.966 [2024-07-15 23:46:35.920658] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.966 [2024-07-15 23:46:35.920665] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.966 [2024-07-15 23:46:35.920671] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.966 [2024-07-15 23:46:35.920799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.966 [2024-07-15 23:46:35.920964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.966 [2024-07-15 23:46:35.920964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.537 23:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:10:21.537 23:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # return 0 00:10:21.537 23:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:21.537 23:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:21.537 23:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.537 23:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.537 23:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:21.537 23:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:21.537 [2024-07-15 23:46:36.678539] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.537 23:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:21.798 23:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.058 [2024-07-15 23:46:37.011912] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.058 23:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:22.059 23:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:22.319 Malloc0 00:10:22.319 23:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:22.580 Delay0 00:10:22.580 23:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.580 23:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:22.841 NULL1 00:10:22.841 23:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:23.102 23:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=321536 00:10:23.102 23:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:23.102 23:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:23.102 23:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.102 23:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.362 23:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:23.362 23:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:23.623 true 00:10:23.623 23:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:23.623 23:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.623 23:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.884 23:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:23.884 23:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:24.145 true 00:10:24.145 23:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:24.145 23:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.145 23:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.406 23:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:24.406 23:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:24.666 true 00:10:24.666 23:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:24.666 23:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.666 23:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.927 23:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:24.927 23:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:25.188 true 00:10:25.188 23:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:25.188 23:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.188 23:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.448 23:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:25.448 23:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:25.708 true 00:10:25.708 23:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:25.708 23:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.708 23:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.969 23:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:25.969 23:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:26.228 true 00:10:26.229 23:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:26.229 23:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.229 23:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.489 23:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:26.489 23:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:26.750 true 00:10:26.750 23:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:26.750 23:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.750 23:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.011 23:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:27.011 23:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:27.271 true 00:10:27.271 23:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:27.271 23:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.271 23:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.533 23:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:27.533 23:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:27.794 true 00:10:27.794 23:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:27.794 23:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.794 23:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.054 23:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:28.054 23:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:28.313 true 00:10:28.313 23:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:28.313 23:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.313 23:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.572 23:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:28.572 23:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:28.832 true 00:10:28.832 23:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:28.832 23:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.832 23:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.092 23:46:44 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:29.092 23:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:29.352 true 00:10:29.352 23:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:29.352 23:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.352 23:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.612 23:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:29.612 23:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:29.873 true 00:10:29.873 23:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:29.873 23:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.873 23:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.133 23:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:30.133 23:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:30.394 true 00:10:30.394 23:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:30.394 23:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.394 23:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.655 23:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:30.655 23:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:30.915 true 00:10:30.915 23:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:30.915 23:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.915 23:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.226 23:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:31.226 23:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:31.226 true 00:10:31.533 23:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:31.533 23:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.533 23:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.793 23:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:31.793 23:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:31.793 true 00:10:31.793 23:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:31.793 23:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.052 23:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.312 23:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:32.312 23:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:32.312 true 00:10:32.312 23:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:32.312 23:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.573 23:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.833 23:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:32.833 23:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:32.833 true 00:10:32.833 23:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:32.833 23:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.092 23:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.351 23:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:33.351 23:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:33.351 true 00:10:33.351 23:46:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:33.352 23:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.610 23:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.870 23:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:33.870 23:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:33.870 true 00:10:33.870 23:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:33.870 23:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.129 23:46:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.405 23:46:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:34.405 23:46:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:34.405 true 00:10:34.405 23:46:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:34.405 23:46:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.668 23:46:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.668 23:46:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:34.668 23:46:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:34.928 true 00:10:34.928 23:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:34.928 23:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.188 23:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.188 23:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:35.188 23:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:35.448 true 00:10:35.448 23:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:35.448 23:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- 
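Before the resize loop began, ns_hotplug_stress.sh (lines 27-42 in the trace above) configured the target entirely over rpc.py and started the read load with spdk_nvme_perf. Reduced to the commands this run issued, with $rpc standing in for the full scripts/rpc.py path and the perf binary path shortened:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # Reader load for the stress loop; its PID (321536 in this run) is polled with kill -0.
    spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &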
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.708 23:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.708 23:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:35.708 23:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:35.969 true 00:10:35.969 23:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:35.969 23:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.229 23:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.229 23:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:36.229 23:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:36.489 true 00:10:36.489 23:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:36.489 23:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.749 23:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.749 23:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:36.749 23:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:37.027 true 00:10:37.027 23:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:37.027 23:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.287 23:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.287 23:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:37.287 23:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:37.546 true 00:10:37.546 23:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:37.546 23:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.806 
23:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.806 23:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:37.806 23:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:38.067 true 00:10:38.067 23:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:38.067 23:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.327 23:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.327 23:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:38.327 23:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:38.587 true 00:10:38.587 23:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:38.587 23:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.847 23:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.847 23:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:38.847 23:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:39.108 true 00:10:39.108 23:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:39.108 23:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.369 23:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.369 23:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:39.369 23:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:39.630 true 00:10:39.630 23:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:39.630 23:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.890 23:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.890 23:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:39.890 23:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:40.151 true 00:10:40.151 23:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:40.151 23:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.411 23:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.411 23:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:40.411 23:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:40.671 true 00:10:40.671 23:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:40.671 23:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.931 23:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.931 23:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:40.931 23:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:41.191 true 00:10:41.191 23:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:41.191 23:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.192 23:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.452 23:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:10:41.452 23:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:41.712 true 00:10:41.712 23:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:41.712 23:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.712 23:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.972 23:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:10:41.972 23:46:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:42.233 true 00:10:42.233 23:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:42.233 23:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.233 23:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.493 23:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:10:42.493 23:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:42.754 true 00:10:42.754 23:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:42.754 23:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.754 23:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.015 23:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:10:43.015 23:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:43.275 true 00:10:43.275 23:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:43.275 23:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.275 23:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.536 23:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:10:43.536 23:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:43.796 true 00:10:43.796 23:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:43.796 23:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.796 23:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.057 23:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:10:44.057 23:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 
00:10:44.317 true 00:10:44.317 23:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:44.317 23:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.317 23:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.577 23:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:10:44.577 23:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:44.838 true 00:10:44.838 23:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:44.838 23:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.838 23:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.098 23:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:10:45.098 23:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:45.359 true 00:10:45.359 23:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:45.359 23:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.359 23:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.619 23:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:10:45.619 23:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:45.879 true 00:10:45.879 23:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:45.879 23:47:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.879 23:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.139 23:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:10:46.139 23:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:46.400 true 00:10:46.400 23:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:46.400 23:47:01 
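The block that repeats from here on is the stress loop itself (script lines 44-50 in the trace): as long as the perf process is still alive, namespace 1 is detached, Delay0 is re-attached, and NULL1 is grown by one unit. A reconstruction of that loop from the trace, reusing its variable names; the real script may structure the liveness check differently:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    PERF_PID=321536      # this run's spdk_nvme_perf PID; normally captured as $! at launch
    null_size=1000
    while kill -0 "$PERF_PID"; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))             # 1001, 1002, ... exactly as logged
        $rpc bdev_null_resize NULL1 "$null_size"
    done

Each resize returns true in the trace, so the null bdev keeps accepting growth while namespaces churn underneath the initiator's randread workload.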
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.400 23:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.661 23:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:10:46.661 23:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:46.661 true 00:10:46.922 23:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:46.922 23:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.922 23:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.182 23:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:10:47.182 23:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:47.442 true 00:10:47.442 23:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:47.442 23:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.442 23:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.703 23:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:10:47.703 23:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:47.703 true 00:10:47.964 23:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:47.964 23:47:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.964 23:47:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.224 23:47:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:10:48.224 23:47:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:48.224 true 00:10:48.485 23:47:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:48.485 23:47:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:10:48.485 23:47:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.746 23:47:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:10:48.746 23:47:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:48.746 true 00:10:49.007 23:47:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:49.007 23:47:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.007 23:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.267 23:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:10:49.267 23:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:49.527 true 00:10:49.527 23:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:49.527 23:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.527 23:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.788 23:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:10:49.788 23:47:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:10:49.788 true 00:10:50.049 23:47:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:50.049 23:47:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.049 23:47:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.310 23:47:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:10:50.310 23:47:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:10:50.570 true 00:10:50.570 23:47:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:50.570 23:47:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.570 23:47:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.830 23:47:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:10:50.830 23:47:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:10:50.830 true 00:10:51.090 23:47:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:51.090 23:47:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.090 23:47:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.350 23:47:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:10:51.350 23:47:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:10:51.350 true 00:10:51.610 23:47:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:51.610 23:47:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.610 23:47:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.870 23:47:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:10:51.870 23:47:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:10:51.870 true 00:10:52.128 23:47:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:52.128 23:47:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.128 23:47:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.387 23:47:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:10:52.387 23:47:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:10:52.685 true 00:10:52.685 23:47:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536 00:10:52.685 23:47:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.685 23:47:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.946 23:47:07 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058
00:10:52.946 23:47:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058
00:10:52.946 true
00:10:53.207 23:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536
00:10:53.207 23:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:53.207 23:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:53.207 Initializing NVMe Controllers
00:10:53.207 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:53.207 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:10:53.207 Controller IO queue size 128, less than required.
00:10:53.207 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:53.207 WARNING: Some requested NVMe devices were skipped
00:10:53.207 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:53.207 Initialization complete. Launching workers.
00:10:53.207 ========================================================
00:10:53.207                                 Latency(us)
00:10:53.207 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:10:53.207 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30978.31      15.13    4131.80    1620.37   10302.97
00:10:53.207 ========================================================
00:10:53.207 Total                                                                    :   30978.31      15.13    4131.80    1620.37   10302.97
00:10:53.207
00:10:53.467 23:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1059
00:10:53.467 23:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059
00:10:53.467 true
00:10:53.726 23:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 321536
00:10:53.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (321536) - No such process
00:10:53.726 23:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 321536
00:10:53.726 23:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:53.727 23:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:53.986 23:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:53.986 23:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:53.986 23:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:10:53.986 23:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:53.986 23:47:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:53.986 null0 00:10:53.986 23:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:53.986 23:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:53.986 23:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:54.247 null1 00:10:54.247 23:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:54.247 23:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:54.247 23:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:54.508 null2 00:10:54.508 23:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:54.508 23:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:54.508 23:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:54.508 null3 00:10:54.508 23:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:54.508 23:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:54.508 23:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:54.768 null4 00:10:54.768 23:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:54.768 23:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:54.768 23:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:54.768 null5 00:10:55.029 23:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:55.029 23:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:55.029 23:47:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:55.029 null6 00:10:55.029 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:55.029 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:55.029 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:55.291 null7 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.291 23:47:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:55.291 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:55.292 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.292 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:55.292 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:55.292 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:55.292 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:55.292 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 328038 328040 328043 328046 328049 328052 328055 328058 00:10:55.292 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:55.292 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:55.292 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.292 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.553 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:55.814 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.814 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.814 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:55.814 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.814 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
i < 10 )) 00:10:55.814 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:55.814 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:55.814 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:55.814 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:55.814 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:55.814 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:55.814 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:55.814 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.814 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:55.814 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:55.814 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:55.814 23:47:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.075 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:56.336 23:47:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.336 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.336 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:56.336 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.336 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.336 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:56.336 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.336 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.336 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:56.336 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.336 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.336 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:56.336 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.336 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.336 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:56.336 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.336 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.336 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:56.336 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.336 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.336 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:56.336 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.336 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.336 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:56.336 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.597 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:56.858 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:56.859 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:56.859 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.859 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:56.859 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:56.859 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:56.859 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:56.859 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:56.859 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.859 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.859 23:47:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:56.859 23:47:12 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:56.859 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:56.859 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:57.121 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:57.382 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:57.643 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.643 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:57.643 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:57.643 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:57.643 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:57.643 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:57.643 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.643 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.643 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:57.643 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.643 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.643 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:57.643 23:47:12 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.643 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.643 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:57.643 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.643 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.643 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:57.643 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.643 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.643 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:57.643 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.643 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.643 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:57.904 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.904 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.904 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:57.904 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:57.904 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:57.904 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.904 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.904 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:57.904 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.904 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:57.904 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:57.904 23:47:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:57.904 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.904 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.904 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:57.904 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:57.904 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.904 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.904 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:57.904 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:57.904 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:57.904 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:57.904 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:58.165 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.165 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.165 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:58.165 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.165 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.165 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:58.165 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.165 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.165 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:58.165 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:58.165 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:10:58.165 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.165 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:58.165 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:58.165 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.165 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.165 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:58.165 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.165 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:58.165 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:58.165 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:58.165 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:58.454 23:47:13 
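The interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns calls in this stretch of the trace all come from a small hotplug loop in target/ns_hotplug_stress.sh (the @16/@17/@18 markers). A minimal sketch of that loop, reconstructed only from what is visible here: the 10-iteration bound, the null0..null7 backing bdevs and the nsid-to-bdev mapping are taken from the log, while running each rpc.py call in the background and reaping them with wait is an assumption made to mimic the interleaved ordering seen above, not the verbatim script.

# Hedged sketch of the namespace hotplug stress loop.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

i=0
while (( i < 10 )); do
    for n in $(seq 1 8); do
        # re-attach namespace <n>, backed by the null bdev null<n-1>
        "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
    done
    for n in $(seq 1 8); do
        # detach the same namespace again while other adds are still in flight
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n" &
    done
    wait
    (( ++i ))
done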
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:58.454 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:58.745 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:58.745 rmmod nvme_tcp 00:10:58.745 rmmod nvme_fabrics 00:10:58.745 rmmod nvme_keyring 00:10:59.006 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:59.006 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:59.006 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:59.006 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 320922 ']' 00:10:59.006 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 320922 00:10:59.006 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@942 -- # '[' -z 320922 ']' 00:10:59.006 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # kill -0 320922 00:10:59.006 23:47:13 
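The nvmftestfini / nvmfcleanup sequence that begins here winds the hotplug test down: sync, unload the host-side NVMe/TCP modules, kill the nvmf_tgt process, and dismantle the TCP test topology. A condensed sketch of that sequence, using the pid, retry count and interface names shown in the log; the sleep between unload attempts and the explicit ip netns del standing in for _remove_spdk_ns are assumptions, since those details are not printed in this excerpt.

# Hedged sketch of the nvmftestfini-style teardown.
nvmfpid=320922
netns=cvl_0_0_ns_spdk

sync
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break   # retried because the module can still be busy
    sleep 1                            # assumption: brief back-off between attempts
done
modprobe -v -r nvme-fabrics
set -e

kill "$nvmfpid"                        # stop the SPDK target reactor process
wait "$nvmfpid" 2>/dev/null || true    # collect it if it is a child of this shell

ip -4 addr flush cvl_0_1               # drop the initiator-side test address
ip netns del "$netns" 2>/dev/null || true   # assumption: how _remove_spdk_ns cleans up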
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@947 -- # uname 00:10:59.006 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:10:59.006 23:47:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 320922 00:10:59.006 23:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:10:59.006 23:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:10:59.006 23:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # echo 'killing process with pid 320922' 00:10:59.006 killing process with pid 320922 00:10:59.006 23:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@961 -- # kill 320922 00:10:59.006 23:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # wait 320922 00:10:59.006 23:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:59.006 23:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:59.006 23:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:59.006 23:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:59.006 23:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:59.006 23:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.006 23:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:59.006 23:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.554 23:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:01.554 00:11:01.554 real 0m48.765s 00:11:01.554 user 3m13.884s 00:11:01.554 sys 0m17.874s 00:11:01.554 23:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1118 -- # xtrace_disable 00:11:01.554 23:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.554 ************************************ 00:11:01.554 END TEST nvmf_ns_hotplug_stress 00:11:01.554 ************************************ 00:11:01.554 23:47:16 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:11:01.554 23:47:16 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:01.554 23:47:16 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:11:01.554 23:47:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:11:01.554 23:47:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:01.554 ************************************ 00:11:01.554 START TEST nvmf_connect_stress 00:11:01.554 ************************************ 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:01.554 * Looking for test storage... 
00:11:01.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:01.554 23:47:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:09.698 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:09.698 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:09.698 Found net devices under 0000:31:00.0: cvl_0_0 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:09.698 23:47:24 
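The device scan in this part of the trace maps the supported NVMe-oF NICs from PCI IDs to kernel net device names: the two Intel E810 ports at 0000:31:00.0 and 0000:31:00.1 are matched (device id 0x159b) and their interfaces resolved through sysfs, yielding cvl_0_0 and cvl_0_1. A small sketch of that resolution step as done by gather_supported_nvmf_pci_devs; the PCI addresses are copied from the log, and hard-coding them (rather than rebuilding the full e810/x722/mlx lookup tables) is a simplification.

# Hedged sketch of PCI-to-netdev resolution.
pci_devs=(0000:31:00.0 0000:31:00.1)
net_devs=()

for pci in "${pci_devs[@]}"; do
    # every net device registered by this PCI function appears as a
    # directory under /sys/bus/pci/devices/<addr>/net/
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done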
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:09.698 Found net devices under 0000:31:00.1: cvl_0_1 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:09.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:09.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:11:09.698 00:11:09.698 --- 10.0.0.2 ping statistics --- 00:11:09.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.698 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:09.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:09.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:11:09.698 00:11:09.698 --- 10.0.0.1 ping statistics --- 00:11:09.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.698 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:09.698 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:09.699 23:47:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:09.699 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:09.699 23:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:09.699 23:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.699 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=333667 00:11:09.699 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 333667 00:11:09.699 23:47:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:09.699 23:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@823 -- # '[' -z 333667 ']' 00:11:09.699 23:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.699 23:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@828 -- # local max_retries=100 00:11:09.699 23:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.699 23:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:09.699 23:47:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.699 [2024-07-15 23:47:24.722560] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
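Taken together, the nvmf_tcp_init steps above put the first E810 port into a private network namespace as the target-side interface, leave the second port in the root namespace as the initiator side, verify connectivity in both directions, and then start nvmf_tgt inside that namespace. A sketch of the same topology with addresses, interface and namespace names copied from the log; the trailing sleep stands in for the waitforlisten helper and is only an assumption about timing, not how the framework actually waits for the RPC socket.

# Hedged sketch of the nvmf_tcp_init topology and target launch.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ns=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$ns"
ip link set cvl_0_0 netns "$ns"                           # target-side port

ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address

ip link set cvl_0_1 up
ip netns exec "$ns" ip link set cvl_0_0 up
ip netns exec "$ns" ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in

ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec "$ns" ping -c 1 10.0.0.1              # target -> initiator

ip netns exec "$ns" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
sleep 3   # assumption: placeholder for waitforlisten on the RPC socket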
00:11:09.699 [2024-07-15 23:47:24.722625] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.699 [2024-07-15 23:47:24.818056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:09.959 [2024-07-15 23:47:24.892800] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.959 [2024-07-15 23:47:24.892854] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.959 [2024-07-15 23:47:24.892862] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.959 [2024-07-15 23:47:24.892868] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.959 [2024-07-15 23:47:24.892874] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:09.959 [2024-07-15 23:47:24.893003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.959 [2024-07-15 23:47:24.893162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.959 [2024-07-15 23:47:24.893163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # return 0 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.530 [2024-07-15 23:47:25.545180] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.530 [2024-07-15 23:47:25.579387] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.530 NULL1 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=333771 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:10.530 23:47:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.099 23:47:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:11.099 23:47:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:11.099 23:47:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.099 23:47:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:11.099 23:47:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.358 23:47:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:11.358 23:47:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:11.358 23:47:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.358 23:47:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:11.358 23:47:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.618 23:47:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:11.618 23:47:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:11.618 23:47:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.618 23:47:26 
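The rpc_cmd calls just above configure the target that connect_stress will exercise: a TCP transport, one subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420 and a null bdev to back namespace I/O, after which the stress tool is launched for a 10-second run. A sketch of the same sequence with flags copied verbatim from the log; the rpc() wrapper talking to the default /var/tmp/spdk.sock socket is a simplification of the framework's rpc_cmd helper.

# Hedged sketch of the connect_stress target setup and launch.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$SPDK/scripts/rpc.py" "$@"; }   # simplification of rpc_cmd
nqn=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192      # flags exactly as logged above
rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512              # null backing bdev (size 1000, block size 512, as logged)

# Run the connect/disconnect stress tool against the listener for 10 seconds.
"$SPDK/test/nvme/connect_stress/connect_stress" -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -t 10 &
PERF_PID=$!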
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:11.618 23:47:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.878 23:47:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:11.878 23:47:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:11.878 23:47:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.878 23:47:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:11.878 23:47:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.139 23:47:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:12.139 23:47:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:12.139 23:47:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.139 23:47:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:12.139 23:47:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.710 23:47:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:12.710 23:47:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:12.710 23:47:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.710 23:47:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:12.710 23:47:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.970 23:47:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:12.970 23:47:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:12.970 23:47:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.970 23:47:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:12.970 23:47:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.230 23:47:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:13.230 23:47:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:13.230 23:47:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.230 23:47:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:13.230 23:47:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.489 23:47:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:13.489 23:47:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:13.489 23:47:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.489 23:47:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:13.490 23:47:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.059 23:47:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:14.059 23:47:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:14.059 23:47:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.059 23:47:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:11:14.059 23:47:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.319 23:47:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:14.319 23:47:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:14.319 23:47:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.319 23:47:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:14.319 23:47:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.578 23:47:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:14.578 23:47:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:14.578 23:47:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.578 23:47:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:14.578 23:47:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.838 23:47:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:14.838 23:47:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:14.838 23:47:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.838 23:47:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:14.838 23:47:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.096 23:47:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:15.096 23:47:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:15.096 23:47:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.096 23:47:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:15.096 23:47:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.663 23:47:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:15.663 23:47:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:15.663 23:47:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.663 23:47:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:15.663 23:47:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.922 23:47:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:15.922 23:47:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:15.922 23:47:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.922 23:47:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:15.922 23:47:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.182 23:47:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:16.182 23:47:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:16.182 23:47:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.182 23:47:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:16.182 23:47:31 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:11:16.441 23:47:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:16.441 23:47:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:16.441 23:47:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.441 23:47:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:16.441 23:47:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.700 23:47:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:16.700 23:47:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:16.700 23:47:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.700 23:47:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:16.700 23:47:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.271 23:47:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:17.271 23:47:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:17.271 23:47:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.271 23:47:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:17.271 23:47:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.532 23:47:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:17.532 23:47:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:17.532 23:47:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.532 23:47:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:17.532 23:47:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.793 23:47:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:17.793 23:47:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:17.793 23:47:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.793 23:47:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:17.793 23:47:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.054 23:47:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:18.054 23:47:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:18.054 23:47:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.054 23:47:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:18.054 23:47:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.623 23:47:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:18.623 23:47:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:18.623 23:47:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.623 23:47:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:18.623 23:47:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.883 23:47:33 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:18.883 23:47:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:18.883 23:47:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.883 23:47:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:18.883 23:47:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.144 23:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:19.144 23:47:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:19.144 23:47:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.144 23:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:19.144 23:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.404 23:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:19.404 23:47:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:19.404 23:47:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.404 23:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:19.404 23:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.664 23:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:19.664 23:47:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:19.664 23:47:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.664 23:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:19.664 23:47:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.235 23:47:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:20.236 23:47:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:20.236 23:47:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:20.236 23:47:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:20.236 23:47:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.497 23:47:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:20.497 23:47:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:20.497 23:47:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:20.497 23:47:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:20.497 23:47:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.758 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 333771 00:11:20.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (333771) - No such process 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 333771 00:11:20.758 23:47:35 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:20.758 rmmod nvme_tcp 00:11:20.758 rmmod nvme_fabrics 00:11:20.758 rmmod nvme_keyring 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 333667 ']' 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 333667 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@942 -- # '[' -z 333667 ']' 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # kill -0 333667 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@947 -- # uname 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 333667 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@960 -- # echo 'killing process with pid 333667' 00:11:20.758 killing process with pid 333667 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@961 -- # kill 333667 00:11:20.758 23:47:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # wait 333667 00:11:21.019 23:47:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:21.019 23:47:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:21.019 23:47:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:21.019 23:47:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:21.019 23:47:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:21.019 23:47:36 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.019 23:47:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:21.019 23:47:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.932 23:47:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr 
flush cvl_0_1 00:11:22.932 00:11:22.932 real 0m21.829s 00:11:22.932 user 0m42.463s 00:11:22.932 sys 0m9.336s 00:11:22.932 23:47:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1118 -- # xtrace_disable 00:11:22.932 23:47:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.932 ************************************ 00:11:22.932 END TEST nvmf_connect_stress 00:11:22.932 ************************************ 00:11:23.194 23:47:38 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:11:23.194 23:47:38 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:23.194 23:47:38 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:11:23.194 23:47:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:11:23.194 23:47:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:23.194 ************************************ 00:11:23.194 START TEST nvmf_fused_ordering 00:11:23.194 ************************************ 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:23.194 * Looking for test storage... 00:11:23.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.194 23:47:38 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:23.194 23:47:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:31.334 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:31.334 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:31.334 Found net devices under 0000:31:00.0: cvl_0_0 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.334 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:31.335 Found net devices under 0000:31:00.1: cvl_0_1 00:11:31.335 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.335 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:31.335 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:31.335 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:31.335 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:31.335 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:31.335 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.335 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:31.335 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:31.335 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:31.335 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:31.335 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:31.335 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:31.335 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:31.335 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:31.335 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:31.335 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:31.335 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:11:31.335 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:31.335 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:31.335 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:31.335 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:31.335 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:31.335 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:31.595 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:31.595 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:31.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:31.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:11:31.595 00:11:31.595 --- 10.0.0.2 ping statistics --- 00:11:31.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.595 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:11:31.595 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:31.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:31.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:11:31.595 00:11:31.595 --- 10.0.0.1 ping statistics --- 00:11:31.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.595 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:11:31.595 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.595 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:31.595 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:31.595 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.595 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:31.595 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:31.595 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.596 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:31.596 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:31.596 23:47:46 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:31.596 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:31.596 23:47:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:31.596 23:47:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:31.596 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=340621 00:11:31.596 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 340621 00:11:31.596 23:47:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:31.596 23:47:46 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@823 -- # '[' -z 340621 ']' 00:11:31.596 23:47:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.596 23:47:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@828 -- # local max_retries=100 00:11:31.596 23:47:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.596 23:47:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:31.596 23:47:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:31.596 [2024-07-15 23:47:46.654595] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:11:31.596 [2024-07-15 23:47:46.654658] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.596 [2024-07-15 23:47:46.750165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.856 [2024-07-15 23:47:46.843859] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.856 [2024-07-15 23:47:46.843922] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:31.856 [2024-07-15 23:47:46.843930] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.856 [2024-07-15 23:47:46.843937] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.856 [2024-07-15 23:47:46.843943] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:31.856 [2024-07-15 23:47:46.843972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # return 0 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:32.429 [2024-07-15 23:47:47.491147] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:32.429 [2024-07-15 23:47:47.507387] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:32.429 NULL1 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:32.429 23:47:47 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:32.429 23:47:47 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:32.429 [2024-07-15 23:47:47.564663] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:11:32.429 [2024-07-15 23:47:47.564706] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid340760 ] 00:11:32.999 Attached to nqn.2016-06.io.spdk:cnode1 00:11:32.999 Namespace ID: 1 size: 1GB 00:11:32.999 fused_ordering(0) 00:11:32.999 fused_ordering(1) 00:11:32.999 fused_ordering(2) 00:11:32.999 fused_ordering(3) 00:11:32.999 fused_ordering(4) 00:11:32.999 fused_ordering(5) 00:11:32.999 fused_ordering(6) 00:11:32.999 fused_ordering(7) 00:11:32.999 fused_ordering(8) 00:11:32.999 fused_ordering(9) 00:11:32.999 fused_ordering(10) 00:11:32.999 fused_ordering(11) 00:11:32.999 fused_ordering(12) 00:11:32.999 fused_ordering(13) 00:11:32.999 fused_ordering(14) 00:11:32.999 fused_ordering(15) 00:11:32.999 fused_ordering(16) 00:11:32.999 fused_ordering(17) 00:11:32.999 fused_ordering(18) 00:11:32.999 fused_ordering(19) 00:11:32.999 fused_ordering(20) 00:11:32.999 fused_ordering(21) 00:11:32.999 fused_ordering(22) 00:11:32.999 fused_ordering(23) 00:11:32.999 fused_ordering(24) 00:11:32.999 fused_ordering(25) 00:11:32.999 fused_ordering(26) 00:11:32.999 fused_ordering(27) 00:11:32.999 fused_ordering(28) 00:11:32.999 fused_ordering(29) 00:11:32.999 fused_ordering(30) 00:11:32.999 fused_ordering(31) 00:11:32.999 fused_ordering(32) 00:11:32.999 fused_ordering(33) 00:11:32.999 fused_ordering(34) 00:11:32.999 fused_ordering(35) 00:11:32.999 fused_ordering(36) 00:11:32.999 fused_ordering(37) 00:11:32.999 fused_ordering(38) 00:11:32.999 fused_ordering(39) 00:11:32.999 fused_ordering(40) 00:11:32.999 fused_ordering(41) 00:11:32.999 fused_ordering(42) 00:11:32.999 fused_ordering(43) 00:11:32.999 fused_ordering(44) 00:11:32.999 fused_ordering(45) 00:11:32.999 fused_ordering(46) 00:11:32.999 fused_ordering(47) 00:11:32.999 fused_ordering(48) 00:11:32.999 fused_ordering(49) 00:11:32.999 fused_ordering(50) 00:11:32.999 fused_ordering(51) 00:11:32.999 fused_ordering(52) 00:11:32.999 fused_ordering(53) 00:11:32.999 fused_ordering(54) 00:11:32.999 fused_ordering(55) 00:11:32.999 fused_ordering(56) 00:11:32.999 fused_ordering(57) 00:11:32.999 fused_ordering(58) 00:11:32.999 fused_ordering(59) 00:11:32.999 fused_ordering(60) 00:11:32.999 fused_ordering(61) 00:11:32.999 fused_ordering(62) 00:11:32.999 fused_ordering(63) 00:11:32.999 fused_ordering(64) 00:11:32.999 fused_ordering(65) 00:11:32.999 fused_ordering(66) 00:11:32.999 fused_ordering(67) 00:11:32.999 fused_ordering(68) 00:11:32.999 fused_ordering(69) 00:11:32.999 fused_ordering(70) 00:11:32.999 fused_ordering(71) 00:11:32.999 fused_ordering(72) 00:11:32.999 fused_ordering(73) 00:11:33.000 fused_ordering(74) 00:11:33.000 fused_ordering(75) 00:11:33.000 fused_ordering(76) 00:11:33.000 fused_ordering(77) 00:11:33.000 fused_ordering(78) 00:11:33.000 fused_ordering(79) 00:11:33.000 fused_ordering(80) 00:11:33.000 
fused_ordering(81) 00:11:33.000 fused_ordering(82) 00:11:33.000 fused_ordering(83) 00:11:33.000 fused_ordering(84) 00:11:33.000 fused_ordering(85) 00:11:33.000 fused_ordering(86) 00:11:33.000 fused_ordering(87) 00:11:33.000 fused_ordering(88) 00:11:33.000 fused_ordering(89) 00:11:33.000 fused_ordering(90) 00:11:33.000 fused_ordering(91) 00:11:33.000 fused_ordering(92) 00:11:33.000 fused_ordering(93) 00:11:33.000 fused_ordering(94) 00:11:33.000 fused_ordering(95) 00:11:33.000 fused_ordering(96) 00:11:33.000 fused_ordering(97) 00:11:33.000 fused_ordering(98) 00:11:33.000 fused_ordering(99) 00:11:33.000 fused_ordering(100) 00:11:33.000 fused_ordering(101) 00:11:33.000 fused_ordering(102) 00:11:33.000 fused_ordering(103) 00:11:33.000 fused_ordering(104) 00:11:33.000 fused_ordering(105) 00:11:33.000 fused_ordering(106) 00:11:33.000 fused_ordering(107) 00:11:33.000 fused_ordering(108) 00:11:33.000 fused_ordering(109) 00:11:33.000 fused_ordering(110) 00:11:33.000 fused_ordering(111) 00:11:33.000 fused_ordering(112) 00:11:33.000 fused_ordering(113) 00:11:33.000 fused_ordering(114) 00:11:33.000 fused_ordering(115) 00:11:33.000 fused_ordering(116) 00:11:33.000 fused_ordering(117) 00:11:33.000 fused_ordering(118) 00:11:33.000 fused_ordering(119) 00:11:33.000 fused_ordering(120) 00:11:33.000 fused_ordering(121) 00:11:33.000 fused_ordering(122) 00:11:33.000 fused_ordering(123) 00:11:33.000 fused_ordering(124) 00:11:33.000 fused_ordering(125) 00:11:33.000 fused_ordering(126) 00:11:33.000 fused_ordering(127) 00:11:33.000 fused_ordering(128) 00:11:33.000 fused_ordering(129) 00:11:33.000 fused_ordering(130) 00:11:33.000 fused_ordering(131) 00:11:33.000 fused_ordering(132) 00:11:33.000 fused_ordering(133) 00:11:33.000 fused_ordering(134) 00:11:33.000 fused_ordering(135) 00:11:33.000 fused_ordering(136) 00:11:33.000 fused_ordering(137) 00:11:33.000 fused_ordering(138) 00:11:33.000 fused_ordering(139) 00:11:33.000 fused_ordering(140) 00:11:33.000 fused_ordering(141) 00:11:33.000 fused_ordering(142) 00:11:33.000 fused_ordering(143) 00:11:33.000 fused_ordering(144) 00:11:33.000 fused_ordering(145) 00:11:33.000 fused_ordering(146) 00:11:33.000 fused_ordering(147) 00:11:33.000 fused_ordering(148) 00:11:33.000 fused_ordering(149) 00:11:33.000 fused_ordering(150) 00:11:33.000 fused_ordering(151) 00:11:33.000 fused_ordering(152) 00:11:33.000 fused_ordering(153) 00:11:33.000 fused_ordering(154) 00:11:33.000 fused_ordering(155) 00:11:33.000 fused_ordering(156) 00:11:33.000 fused_ordering(157) 00:11:33.000 fused_ordering(158) 00:11:33.000 fused_ordering(159) 00:11:33.000 fused_ordering(160) 00:11:33.000 fused_ordering(161) 00:11:33.000 fused_ordering(162) 00:11:33.000 fused_ordering(163) 00:11:33.000 fused_ordering(164) 00:11:33.000 fused_ordering(165) 00:11:33.000 fused_ordering(166) 00:11:33.000 fused_ordering(167) 00:11:33.000 fused_ordering(168) 00:11:33.000 fused_ordering(169) 00:11:33.000 fused_ordering(170) 00:11:33.000 fused_ordering(171) 00:11:33.000 fused_ordering(172) 00:11:33.000 fused_ordering(173) 00:11:33.000 fused_ordering(174) 00:11:33.000 fused_ordering(175) 00:11:33.000 fused_ordering(176) 00:11:33.000 fused_ordering(177) 00:11:33.000 fused_ordering(178) 00:11:33.000 fused_ordering(179) 00:11:33.000 fused_ordering(180) 00:11:33.000 fused_ordering(181) 00:11:33.000 fused_ordering(182) 00:11:33.000 fused_ordering(183) 00:11:33.000 fused_ordering(184) 00:11:33.000 fused_ordering(185) 00:11:33.000 fused_ordering(186) 00:11:33.000 fused_ordering(187) 00:11:33.000 fused_ordering(188) 00:11:33.000 
fused_ordering(189) 00:11:33.000 fused_ordering(190) 00:11:33.000 fused_ordering(191) 00:11:33.000 fused_ordering(192) 00:11:33.000 fused_ordering(193) 00:11:33.000 fused_ordering(194) 00:11:33.000 fused_ordering(195) 00:11:33.000 fused_ordering(196) 00:11:33.000 fused_ordering(197) 00:11:33.000 fused_ordering(198) 00:11:33.000 fused_ordering(199) 00:11:33.000 fused_ordering(200) 00:11:33.000 fused_ordering(201) 00:11:33.000 fused_ordering(202) 00:11:33.000 fused_ordering(203) 00:11:33.000 fused_ordering(204) 00:11:33.000 fused_ordering(205) 00:11:33.260 fused_ordering(206) 00:11:33.260 fused_ordering(207) 00:11:33.260 fused_ordering(208) 00:11:33.260 fused_ordering(209) 00:11:33.260 fused_ordering(210) 00:11:33.260 fused_ordering(211) 00:11:33.260 fused_ordering(212) 00:11:33.260 fused_ordering(213) 00:11:33.260 fused_ordering(214) 00:11:33.260 fused_ordering(215) 00:11:33.260 fused_ordering(216) 00:11:33.260 fused_ordering(217) 00:11:33.260 fused_ordering(218) 00:11:33.260 fused_ordering(219) 00:11:33.260 fused_ordering(220) 00:11:33.260 fused_ordering(221) 00:11:33.260 fused_ordering(222) 00:11:33.260 fused_ordering(223) 00:11:33.260 fused_ordering(224) 00:11:33.260 fused_ordering(225) 00:11:33.260 fused_ordering(226) 00:11:33.260 fused_ordering(227) 00:11:33.260 fused_ordering(228) 00:11:33.260 fused_ordering(229) 00:11:33.260 fused_ordering(230) 00:11:33.260 fused_ordering(231) 00:11:33.260 fused_ordering(232) 00:11:33.260 fused_ordering(233) 00:11:33.260 fused_ordering(234) 00:11:33.260 fused_ordering(235) 00:11:33.260 fused_ordering(236) 00:11:33.260 fused_ordering(237) 00:11:33.260 fused_ordering(238) 00:11:33.260 fused_ordering(239) 00:11:33.260 fused_ordering(240) 00:11:33.260 fused_ordering(241) 00:11:33.260 fused_ordering(242) 00:11:33.260 fused_ordering(243) 00:11:33.260 fused_ordering(244) 00:11:33.260 fused_ordering(245) 00:11:33.260 fused_ordering(246) 00:11:33.260 fused_ordering(247) 00:11:33.260 fused_ordering(248) 00:11:33.260 fused_ordering(249) 00:11:33.260 fused_ordering(250) 00:11:33.260 fused_ordering(251) 00:11:33.260 fused_ordering(252) 00:11:33.260 fused_ordering(253) 00:11:33.260 fused_ordering(254) 00:11:33.260 fused_ordering(255) 00:11:33.260 fused_ordering(256) 00:11:33.260 fused_ordering(257) 00:11:33.260 fused_ordering(258) 00:11:33.260 fused_ordering(259) 00:11:33.260 fused_ordering(260) 00:11:33.260 fused_ordering(261) 00:11:33.260 fused_ordering(262) 00:11:33.260 fused_ordering(263) 00:11:33.260 fused_ordering(264) 00:11:33.260 fused_ordering(265) 00:11:33.260 fused_ordering(266) 00:11:33.260 fused_ordering(267) 00:11:33.260 fused_ordering(268) 00:11:33.260 fused_ordering(269) 00:11:33.260 fused_ordering(270) 00:11:33.260 fused_ordering(271) 00:11:33.260 fused_ordering(272) 00:11:33.260 fused_ordering(273) 00:11:33.260 fused_ordering(274) 00:11:33.260 fused_ordering(275) 00:11:33.260 fused_ordering(276) 00:11:33.260 fused_ordering(277) 00:11:33.260 fused_ordering(278) 00:11:33.260 fused_ordering(279) 00:11:33.260 fused_ordering(280) 00:11:33.260 fused_ordering(281) 00:11:33.260 fused_ordering(282) 00:11:33.260 fused_ordering(283) 00:11:33.260 fused_ordering(284) 00:11:33.260 fused_ordering(285) 00:11:33.260 fused_ordering(286) 00:11:33.260 fused_ordering(287) 00:11:33.260 fused_ordering(288) 00:11:33.260 fused_ordering(289) 00:11:33.260 fused_ordering(290) 00:11:33.260 fused_ordering(291) 00:11:33.260 fused_ordering(292) 00:11:33.260 fused_ordering(293) 00:11:33.260 fused_ordering(294) 00:11:33.260 fused_ordering(295) 00:11:33.260 fused_ordering(296) 
00:11:33.260 fused_ordering(297) 00:11:33.260 fused_ordering(298) 00:11:33.260 fused_ordering(299) 00:11:33.260 fused_ordering(300) 00:11:33.260 fused_ordering(301) 00:11:33.260 fused_ordering(302) 00:11:33.260 fused_ordering(303) 00:11:33.260 fused_ordering(304) 00:11:33.260 fused_ordering(305) 00:11:33.260 fused_ordering(306) 00:11:33.260 fused_ordering(307) 00:11:33.260 fused_ordering(308) 00:11:33.260 fused_ordering(309) 00:11:33.260 fused_ordering(310) 00:11:33.260 fused_ordering(311) 00:11:33.260 fused_ordering(312) 00:11:33.260 fused_ordering(313) 00:11:33.260 fused_ordering(314) 00:11:33.260 fused_ordering(315) 00:11:33.260 fused_ordering(316) 00:11:33.260 fused_ordering(317) 00:11:33.260 fused_ordering(318) 00:11:33.260 fused_ordering(319) 00:11:33.260 fused_ordering(320) 00:11:33.260 fused_ordering(321) 00:11:33.260 fused_ordering(322) 00:11:33.260 fused_ordering(323) 00:11:33.260 fused_ordering(324) 00:11:33.260 fused_ordering(325) 00:11:33.260 fused_ordering(326) 00:11:33.260 fused_ordering(327) 00:11:33.260 fused_ordering(328) 00:11:33.260 fused_ordering(329) 00:11:33.260 fused_ordering(330) 00:11:33.260 fused_ordering(331) 00:11:33.260 fused_ordering(332) 00:11:33.260 fused_ordering(333) 00:11:33.260 fused_ordering(334) 00:11:33.260 fused_ordering(335) 00:11:33.260 fused_ordering(336) 00:11:33.260 fused_ordering(337) 00:11:33.260 fused_ordering(338) 00:11:33.260 fused_ordering(339) 00:11:33.260 fused_ordering(340) 00:11:33.260 fused_ordering(341) 00:11:33.260 fused_ordering(342) 00:11:33.260 fused_ordering(343) 00:11:33.260 fused_ordering(344) 00:11:33.260 fused_ordering(345) 00:11:33.260 fused_ordering(346) 00:11:33.260 fused_ordering(347) 00:11:33.260 fused_ordering(348) 00:11:33.260 fused_ordering(349) 00:11:33.260 fused_ordering(350) 00:11:33.260 fused_ordering(351) 00:11:33.260 fused_ordering(352) 00:11:33.260 fused_ordering(353) 00:11:33.260 fused_ordering(354) 00:11:33.260 fused_ordering(355) 00:11:33.260 fused_ordering(356) 00:11:33.260 fused_ordering(357) 00:11:33.260 fused_ordering(358) 00:11:33.260 fused_ordering(359) 00:11:33.260 fused_ordering(360) 00:11:33.260 fused_ordering(361) 00:11:33.260 fused_ordering(362) 00:11:33.260 fused_ordering(363) 00:11:33.260 fused_ordering(364) 00:11:33.260 fused_ordering(365) 00:11:33.260 fused_ordering(366) 00:11:33.260 fused_ordering(367) 00:11:33.260 fused_ordering(368) 00:11:33.260 fused_ordering(369) 00:11:33.260 fused_ordering(370) 00:11:33.260 fused_ordering(371) 00:11:33.260 fused_ordering(372) 00:11:33.260 fused_ordering(373) 00:11:33.260 fused_ordering(374) 00:11:33.260 fused_ordering(375) 00:11:33.260 fused_ordering(376) 00:11:33.260 fused_ordering(377) 00:11:33.260 fused_ordering(378) 00:11:33.260 fused_ordering(379) 00:11:33.260 fused_ordering(380) 00:11:33.260 fused_ordering(381) 00:11:33.260 fused_ordering(382) 00:11:33.260 fused_ordering(383) 00:11:33.260 fused_ordering(384) 00:11:33.260 fused_ordering(385) 00:11:33.260 fused_ordering(386) 00:11:33.260 fused_ordering(387) 00:11:33.260 fused_ordering(388) 00:11:33.260 fused_ordering(389) 00:11:33.260 fused_ordering(390) 00:11:33.260 fused_ordering(391) 00:11:33.260 fused_ordering(392) 00:11:33.260 fused_ordering(393) 00:11:33.260 fused_ordering(394) 00:11:33.260 fused_ordering(395) 00:11:33.260 fused_ordering(396) 00:11:33.260 fused_ordering(397) 00:11:33.260 fused_ordering(398) 00:11:33.260 fused_ordering(399) 00:11:33.260 fused_ordering(400) 00:11:33.260 fused_ordering(401) 00:11:33.260 fused_ordering(402) 00:11:33.260 fused_ordering(403) 00:11:33.260 
fused_ordering(404) ... fused_ordering(1023) [repetitive per-iteration counter output condensed: fused_ordering iterations 404 through 1023 were logged sequentially between 00:11:33.260 and 00:11:34.971, with nothing but the counter values reported]
00:11:34.971 23:47:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:34.971 23:47:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:34.971 23:47:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:34.971 23:47:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:34.971 23:47:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:34.971 23:47:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:34.971 23:47:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:34.971 23:47:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:34.971 rmmod nvme_tcp 00:11:34.971 rmmod
nvme_fabrics 00:11:34.971 rmmod nvme_keyring 00:11:34.971 23:47:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:35.231 23:47:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:35.231 23:47:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:35.231 23:47:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 340621 ']' 00:11:35.232 23:47:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 340621 00:11:35.232 23:47:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@942 -- # '[' -z 340621 ']' 00:11:35.232 23:47:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # kill -0 340621 00:11:35.232 23:47:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@947 -- # uname 00:11:35.232 23:47:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:11:35.232 23:47:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 340621 00:11:35.232 23:47:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:11:35.232 23:47:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:11:35.232 23:47:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # echo 'killing process with pid 340621' 00:11:35.232 killing process with pid 340621 00:11:35.232 23:47:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@961 -- # kill 340621 00:11:35.232 23:47:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # wait 340621 00:11:35.232 23:47:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:35.232 23:47:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:35.232 23:47:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:35.232 23:47:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:35.232 23:47:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:35.232 23:47:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.232 23:47:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:35.232 23:47:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.774 23:47:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:37.774 00:11:37.774 real 0m14.275s 00:11:37.774 user 0m7.444s 00:11:37.774 sys 0m7.729s 00:11:37.774 23:47:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1118 -- # xtrace_disable 00:11:37.774 23:47:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:37.774 ************************************ 00:11:37.774 END TEST nvmf_fused_ordering 00:11:37.774 ************************************ 00:11:37.774 23:47:52 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:11:37.774 23:47:52 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:37.774 23:47:52 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:11:37.774 23:47:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:11:37.774 23:47:52 nvmf_tcp -- common/autotest_common.sh@10 -- # 
set +x 00:11:37.774 ************************************ 00:11:37.774 START TEST nvmf_delete_subsystem 00:11:37.774 ************************************ 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:37.774 * Looking for test storage... 00:11:37.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:37.774 23:47:52 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:37.774 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:37.775 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.775 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:37.775 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.775 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:37.775 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:37.775 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:37.775 23:47:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:45.997 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:45.997 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:45.997 
23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:45.997 Found net devices under 0000:31:00.0: cvl_0_0 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:45.997 Found net devices under 0000:31:00.1: cvl_0_1 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:45.997 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:45.998 23:48:00 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:45.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:45.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:11:45.998 00:11:45.998 --- 10.0.0.2 ping statistics --- 00:11:45.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.998 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:45.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:45.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms 00:11:45.998 00:11:45.998 --- 10.0.0.1 ping statistics --- 00:11:45.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.998 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=346144 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 346144 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@823 -- # '[' -z 346144 ']' 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem 
-- common/autotest_common.sh@828 -- # local max_retries=100 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:45.998 23:48:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:45.998 [2024-07-15 23:48:00.995976] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:11:45.998 [2024-07-15 23:48:00.996042] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:45.998 [2024-07-15 23:48:01.077323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:45.998 [2024-07-15 23:48:01.152613] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:45.998 [2024-07-15 23:48:01.152653] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:45.998 [2024-07-15 23:48:01.152661] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:45.998 [2024-07-15 23:48:01.152667] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:45.998 [2024-07-15 23:48:01.152673] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:45.998 [2024-07-15 23:48:01.152812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.998 [2024-07-15 23:48:01.152814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # return 0 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:46.943 [2024-07-15 23:48:01.808367] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
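The waitforlisten step traced here just waits until the freshly started nvmf_tgt (pid 346144, launched inside the cvl_0_0_ns_spdk namespace created earlier) is up and reachable over its RPC socket before any configuration is sent. A minimal stand-alone sketch of that bring-up, assuming the default /var/tmp/spdk.sock socket and using rpc_get_methods purely as a readiness probe (not necessarily the exact probe autotest_common.sh uses):

  # start the target pinned to cores 0-1 (-m 0x3), shared-memory id 0, all tracepoint groups enabled
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # poll the RPC socket until the target answers, then continue with configuration
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
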
00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:46.943 [2024-07-15 23:48:01.832550] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:46.943 NULL1 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:46.943 Delay0 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=346273 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:46.943 23:48:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:46.944 [2024-07-15 23:48:01.929220] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
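For readers following the trace, the rpc_cmd calls above reduce to a short configuration sequence. A minimal sketch driving the same RPCs through scripts/rpc.py against the default socket (rpc_cmd in autotest_common.sh is effectively a wrapper around that script), with every address, NQN and parameter taken from this run:

  # TCP transport, with the same options the test passed (-o, and -u 8192 for the I/O unit size)
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # subsystem that allows any host (-a), serial SPDK00000000000001, at most 10 namespaces
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # 1000 MiB null bdev with 512-byte blocks, wrapped in a delay bdev; the 1000000 values add
  # roughly a second of latency per I/O
  ./scripts/rpc.py bdev_null_create NULL1 1000 512
  ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The Delay0 bdev is there to keep I/O from the spdk_nvme_perf workload outstanding long enough for the subsystem deletion in the next step to race with it.
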
00:11:48.852 23:48:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:48.852 23:48:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:48.852 23:48:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 Write completed with error (sct=0, sc=8) 00:11:49.112 Write completed with error (sct=0, sc=8) 00:11:49.112 starting I/O failed: -6 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 Write completed with error (sct=0, sc=8) 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 starting I/O failed: -6 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 Write completed with error (sct=0, sc=8) 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 starting I/O failed: -6 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 starting I/O failed: -6 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 starting I/O failed: -6 00:11:49.112 Write completed with error (sct=0, sc=8) 00:11:49.112 Write completed with error (sct=0, sc=8) 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 Write completed with error (sct=0, sc=8) 00:11:49.112 starting I/O failed: -6 00:11:49.112 Write completed with error (sct=0, sc=8) 00:11:49.112 Write completed with error (sct=0, sc=8) 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 Write completed with error (sct=0, sc=8) 00:11:49.112 starting I/O failed: -6 00:11:49.112 Write completed with error (sct=0, sc=8) 00:11:49.112 Write completed with error (sct=0, sc=8) 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 starting I/O failed: -6 00:11:49.112 Write completed with error (sct=0, sc=8) 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 Write completed with error (sct=0, sc=8) 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 starting I/O failed: -6 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 Write completed with error (sct=0, sc=8) 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 starting I/O failed: -6 00:11:49.112 Read completed with error (sct=0, sc=8) 00:11:49.112 Write completed with error (sct=0, sc=8) 00:11:49.112 [2024-07-15 23:48:04.174085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1029650 is same with the state(5) to be set 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error 
(sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 [2024-07-15 23:48:04.175425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026e90 is same with the state(5) to be set 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 starting I/O failed: -6 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 starting I/O failed: -6 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 starting I/O failed: -6 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 starting I/O failed: -6 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Read completed with error (sct=0, sc=8) 00:11:49.113 Write completed with error (sct=0, sc=8) 00:11:49.113 Read 
completed with error (sct=0, sc=8)
[the "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages at 00:11:49.113 repeat here, once per remaining queued request]
00:11:50.056 [2024-07-15 23:48:05.152331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1005500 is same with the state(5) to be set
00:11:50.056 [2024-07-15 23:48:05.177128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1025d00 is same with the state(5) to be set
00:11:50.056 [2024-07-15 23:48:05.177661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1026cb0 is same with the state(5) to be set
00:11:50.056 [2024-07-15 23:48:05.181930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3bbc00d740 is same with the state(5) to be set
00:11:50.056 [2024-07-15 23:48:05.182120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3bbc00cfe0 is same with the state(5) to be set
[further Read/Write completion errors at 00:11:50.056 are interleaved between the recv-state errors above]
00:11:50.056 Initializing NVMe Controllers
00:11:50.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:50.056 Controller IO queue size 128, less than required.
00:11:50.056 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:50.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:11:50.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:11:50.056 Initialization complete. Launching workers.
00:11:50.056 ======================================================== 00:11:50.056 Latency(us) 00:11:50.056 Device Information : IOPS MiB/s Average min max 00:11:50.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 159.90 0.08 916053.40 562.28 1006838.79 00:11:50.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 179.82 0.09 921103.64 381.76 1010220.81 00:11:50.056 ======================================================== 00:11:50.056 Total : 339.72 0.17 918726.62 381.76 1010220.81 00:11:50.056 00:11:50.056 [2024-07-15 23:48:05.182662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1005500 (9): Bad file descriptor 00:11:50.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:50.056 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:50.056 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:50.056 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 346273 00:11:50.056 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:50.626 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:50.626 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 346273 00:11:50.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (346273) - No such process 00:11:50.626 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 346273 00:11:50.626 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # local es=0 00:11:50.626 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # valid_exec_arg wait 346273 00:11:50.626 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@630 -- # local arg=wait 00:11:50.626 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:50.626 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@634 -- # type -t wait 00:11:50.626 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:50.626 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@645 -- # wait 346273 00:11:50.626 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@645 -- # es=1 00:11:50.626 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:11:50.626 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:11:50.627 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:11:50.627 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:50.627 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:50.627 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:50.627 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:50.627 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
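For context, the rpc_cmd calls traced at delete_subsystem.sh lines 48-50 simply rebuild the subsystem for the second half of the test. Reduced to plain rpc.py invocations, a minimal sketch only: the $rpc variable is introduced here for brevity, and Delay0 is the bdev the test created earlier, outside this excerpt.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # subsystem with serial SPDK00000000000001, any host allowed, at most 10 namespaces
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # TCP listener on the target-side address used throughout this run
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # attach the Delay0 bdev as a namespace
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0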
00:11:50.627 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:50.627 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:50.627 [2024-07-15 23:48:05.715128] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.627 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:50.627 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:50.627 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:50.627 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:50.627 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:50.627 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=347182 00:11:50.627 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:50.627 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:50.627 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 347182 00:11:50.627 23:48:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:50.627 [2024-07-15 23:48:05.780429] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
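The launch-and-poll pattern that the trace shows next (delete_subsystem.sh lines 52-60) can be summarised as below; this is a rough sketch using the exact spdk_nvme_perf arguments from this run, with perf_pid and delay as ordinary local shell variables, and the subsystem teardown that makes the I/O fail happening elsewhere in the script.

    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    "$perf" -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
            -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    delay=0
    # poll the job roughly every half second, giving up after ~20 iterations as the script does
    while kill -0 "$perf_pid" 2>/dev/null && (( delay++ <= 20 )); do
        sleep 0.5
    done
    wait "$perf_pid" || true   # a non-zero exit is expected when I/O fails mid-run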
00:11:51.197 23:48:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:51.197 23:48:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 347182 00:11:51.197 23:48:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:51.766 23:48:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:51.766 23:48:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 347182 00:11:51.766 23:48:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:52.078 23:48:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:52.078 23:48:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 347182 00:11:52.078 23:48:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:52.646 23:48:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:52.646 23:48:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 347182 00:11:52.646 23:48:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:53.215 23:48:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:53.215 23:48:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 347182 00:11:53.215 23:48:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:53.785 23:48:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:53.785 23:48:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 347182 00:11:53.785 23:48:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:54.046 Initializing NVMe Controllers 00:11:54.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:54.046 Controller IO queue size 128, less than required. 00:11:54.046 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:54.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:54.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:54.046 Initialization complete. Launching workers. 
00:11:54.046 ======================================================== 00:11:54.046 Latency(us) 00:11:54.046 Device Information : IOPS MiB/s Average min max 00:11:54.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002098.32 1000276.25 1006109.47 00:11:54.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003122.27 1000307.70 1041540.46 00:11:54.046 ======================================================== 00:11:54.046 Total : 256.00 0.12 1002610.30 1000276.25 1041540.46 00:11:54.046 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 347182 00:11:54.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (347182) - No such process 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 347182 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:54.306 rmmod nvme_tcp 00:11:54.306 rmmod nvme_fabrics 00:11:54.306 rmmod nvme_keyring 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 346144 ']' 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 346144 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@942 -- # '[' -z 346144 ']' 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # kill -0 346144 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@947 -- # uname 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 346144 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # echo 'killing process with pid 346144' 00:11:54.306 killing process with pid 346144 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@961 -- # kill 346144 00:11:54.306 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # wait 346144 
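The nvmftestfini block that follows is routine cleanup. Condensed into plain commands as a sketch only: $nvmfpid stands for the shell variable holding the nvmf_tgt PID (346144 in this run), and the namespace removal itself is hidden behind _remove_spdk_ns with its output redirected, so it is only mentioned in the comment.

    modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics and nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # stop the nvmf_tgt reactor
    ip -4 addr flush cvl_0_1             # flush the initiator-side interface; _remove_spdk_ns then drops the target netns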
00:11:54.565 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:54.565 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:54.565 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:54.565 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:54.565 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:54.565 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.565 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:54.565 23:48:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.478 23:48:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:56.478 00:11:56.478 real 0m19.060s 00:11:56.478 user 0m31.313s 00:11:56.478 sys 0m7.064s 00:11:56.478 23:48:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1118 -- # xtrace_disable 00:11:56.478 23:48:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:56.478 ************************************ 00:11:56.478 END TEST nvmf_delete_subsystem 00:11:56.478 ************************************ 00:11:56.478 23:48:11 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:11:56.478 23:48:11 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:56.478 23:48:11 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:11:56.478 23:48:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:11:56.478 23:48:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:56.739 ************************************ 00:11:56.739 START TEST nvmf_ns_masking 00:11:56.739 ************************************ 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1117 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:56.739 * Looking for test storage... 
00:11:56.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=e003fe1b-1270-4150-accb-b458003ac69c 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=7e8e7a96-7175-47a1-8088-9e128d9345c4 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=5309ea2e-f28d-42f3-97ac-8de955852a02 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:56.739 23:48:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:04.877 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.877 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:04.878 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:04.878 
23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:04.878 Found net devices under 0000:31:00.0: cvl_0_0 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:04.878 Found net devices under 0000:31:00.1: cvl_0_1 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:04.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:04.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:12:04.878 00:12:04.878 --- 10.0.0.2 ping statistics --- 00:12:04.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.878 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:04.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:04.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:12:04.878 00:12:04.878 --- 10.0.0.1 ping statistics --- 00:12:04.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.878 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=353045 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 353045 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@823 -- # '[' -z 353045 ']' 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:04.878 23:48:19 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:04.878 23:48:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:04.878 [2024-07-15 23:48:20.009311] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:12:04.878 [2024-07-15 23:48:20.009375] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.139 [2024-07-15 23:48:20.091614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.139 [2024-07-15 23:48:20.170628] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.139 [2024-07-15 23:48:20.170671] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.139 [2024-07-15 23:48:20.170684] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.139 [2024-07-15 23:48:20.170691] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.139 [2024-07-15 23:48:20.170696] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.139 [2024-07-15 23:48:20.170715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.709 23:48:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:05.709 23:48:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # return 0 00:12:05.709 23:48:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:05.709 23:48:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:05.709 23:48:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:05.709 23:48:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.709 23:48:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:05.970 [2024-07-15 23:48:20.950139] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.970 23:48:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:05.970 23:48:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:05.970 23:48:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:05.970 Malloc1 00:12:05.970 23:48:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:06.230 Malloc2 00:12:06.230 23:48:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:06.491 23:48:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:06.491 23:48:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.751 [2024-07-15 23:48:21.737027] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.751 23:48:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:06.751 23:48:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5309ea2e-f28d-42f3-97ac-8de955852a02 -a 10.0.0.2 -s 4420 -i 4 00:12:06.751 23:48:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:06.751 23:48:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1192 -- # local i=0 00:12:06.751 23:48:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:12:06.751 23:48:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:12:06.751 23:48:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # sleep 2 00:12:09.291 23:48:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:12:09.291 23:48:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:12:09.291 23:48:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.291 23:48:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:12:09.291 23:48:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.291 23:48:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # return 0 00:12:09.291 23:48:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:09.291 23:48:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:09.291 23:48:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:09.291 23:48:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:09.291 23:48:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:09.291 23:48:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:09.291 23:48:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:09.291 [ 0]:0x1 00:12:09.291 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:09.291 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:09.291 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54656e0354f248b4882af3a419c9e1e0 00:12:09.291 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54656e0354f248b4882af3a419c9e1e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:09.291 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:09.291 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # 
ns_is_visible 0x1 00:12:09.291 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:09.291 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:09.291 [ 0]:0x1 00:12:09.291 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:09.291 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:09.291 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54656e0354f248b4882af3a419c9e1e0 00:12:09.291 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54656e0354f248b4882af3a419c9e1e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:09.291 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:09.291 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:09.291 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:09.291 [ 1]:0x2 00:12:09.291 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:09.291 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:09.291 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=acfc671d3c934443af2ea1b99ed03a2a 00:12:09.291 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ acfc671d3c934443af2ea1b99ed03a2a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:09.291 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:09.291 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.551 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:09.811 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:09.811 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:09.811 23:48:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5309ea2e-f28d-42f3-97ac-8de955852a02 -a 10.0.0.2 -s 4420 -i 4 00:12:10.070 23:48:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:10.070 23:48:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1192 -- # local i=0 00:12:10.070 23:48:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.071 23:48:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # [[ -n 1 ]] 00:12:10.071 23:48:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # nvme_device_counter=1 00:12:10.071 23:48:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # sleep 2 00:12:11.979 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:12:11.979 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:12:11.979 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 
00:12:11.979 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:12:11.979 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:12:11.979 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # return 0 00:12:11.979 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:11.979 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:11.979 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:11.979 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:11.979 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:11.979 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:12:11.979 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@644 -- # valid_exec_arg ns_is_visible 0x1 00:12:11.980 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=ns_is_visible 00:12:11.980 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:12:11.980 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t ns_is_visible 00:12:11.980 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:12:11.980 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # ns_is_visible 0x1 00:12:11.980 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:11.980 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:12.240 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:12.240 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:12.240 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:12.240 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:12.240 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:12:12.240 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:12:12.240 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:12:12.240 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:12:12.240 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:12.240 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:12.240 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:12.240 [ 0]:0x2 00:12:12.240 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:12.240 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:12.240 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=acfc671d3c934443af2ea1b99ed03a2a 00:12:12.240 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ acfc671d3c934443af2ea1b99ed03a2a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 
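The ns_is_visible checks above, together with the nvmf_ns_add_host call that follows, are the heart of the masking test: a namespace added with --no-auto-visible stays hidden until a specific host NQN is granted access. A condensed sketch of the same flow, assuming the initiator's controller enumerates as /dev/nvme0 as it does in this run, with the rpc.py path shortened into a variable:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # namespace 1 is not visible to any host until explicitly allowed
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # grant namespace 1 to host1, check it from the initiator, then revoke it again
    "$rpc" nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # an all-zero NGUID means the namespace is hidden from this host
    "$rpc" nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1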
00:12:12.240 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:12.499 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:12.499 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:12.499 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:12.499 [ 0]:0x1 00:12:12.499 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:12.499 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:12.499 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54656e0354f248b4882af3a419c9e1e0 00:12:12.499 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54656e0354f248b4882af3a419c9e1e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:12.499 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:12.499 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:12.499 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:12.499 [ 1]:0x2 00:12:12.499 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:12.499 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:12.500 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=acfc671d3c934443af2ea1b99ed03a2a 00:12:12.500 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ acfc671d3c934443af2ea1b99ed03a2a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:12.500 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:12.759 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:12.759 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:12:12.759 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@644 -- # valid_exec_arg ns_is_visible 0x1 00:12:12.759 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=ns_is_visible 00:12:12.759 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:12:12.759 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t ns_is_visible 00:12:12.759 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:12:12.759 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # ns_is_visible 0x1 00:12:12.759 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:12.759 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:12.759 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:12.759 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:12.760 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:12.760 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:12.760 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:12:12.760 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:12:12.760 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:12:12.760 23:48:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:12:12.760 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:12.760 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:12.760 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:12.760 [ 0]:0x2 00:12:12.760 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:12.760 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:12.760 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=acfc671d3c934443af2ea1b99ed03a2a 00:12:12.760 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ acfc671d3c934443af2ea1b99ed03a2a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:12.760 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:12.760 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:12.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.760 23:48:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:13.020 23:48:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:13.020 23:48:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 5309ea2e-f28d-42f3-97ac-8de955852a02 -a 10.0.0.2 -s 4420 -i 4 00:12:13.280 23:48:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:13.280 23:48:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1192 -- # local i=0 00:12:13.281 23:48:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:12:13.281 23:48:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # [[ -n 2 ]] 00:12:13.281 23:48:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # nvme_device_counter=2 00:12:13.281 23:48:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # sleep 2 00:12:15.193 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:12:15.193 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:12:15.193 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:12:15.193 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_devices=2 00:12:15.193 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:12:15.193 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # return 0 00:12:15.193 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:15.193 23:48:30 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:15.193 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:15.193 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:15.193 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:15.193 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:15.193 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:15.193 [ 0]:0x1 00:12:15.194 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:15.194 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:15.194 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54656e0354f248b4882af3a419c9e1e0 00:12:15.194 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54656e0354f248b4882af3a419c9e1e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:15.194 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:15.454 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:15.454 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:15.454 [ 1]:0x2 00:12:15.454 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:15.454 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:15.454 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=acfc671d3c934443af2ea1b99ed03a2a 00:12:15.454 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ acfc671d3c934443af2ea1b99ed03a2a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:15.454 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:15.454 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:15.454 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:12:15.454 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@644 -- # valid_exec_arg ns_is_visible 0x1 00:12:15.454 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=ns_is_visible 00:12:15.454 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:12:15.454 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t ns_is_visible 00:12:15.454 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:12:15.454 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # ns_is_visible 0x1 00:12:15.454 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:15.454 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:15.454 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:15.454 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:15.714 [ 0]:0x2 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=acfc671d3c934443af2ea1b99ed03a2a 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ acfc671d3c934443af2ea1b99ed03a2a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:15.714 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:15.714 [2024-07-15 23:48:30.895096] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:15.714 request: 00:12:15.714 { 
00:12:15.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:15.714 "nsid": 2, 00:12:15.714 "host": "nqn.2016-06.io.spdk:host1", 00:12:15.714 "method": "nvmf_ns_remove_host", 00:12:15.714 "req_id": 1 00:12:15.714 } 00:12:15.714 Got JSON-RPC error response 00:12:15.714 response: 00:12:15.714 { 00:12:15.714 "code": -32602, 00:12:15.714 "message": "Invalid parameters" 00:12:15.714 } 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@644 -- # valid_exec_arg ns_is_visible 0x1 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=ns_is_visible 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t ns_is_visible 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # ns_is_visible 0x1 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:15.974 23:48:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:15.974 [ 0]:0x2 00:12:15.974 23:48:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:15.974 23:48:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:15.974 23:48:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=acfc671d3c934443af2ea1b99ed03a2a 00:12:15.974 23:48:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ acfc671d3c934443af2ea1b99ed03a2a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:15.974 
23:48:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:15.974 23:48:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:16.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.234 23:48:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=355261 00:12:16.234 23:48:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.234 23:48:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:16.234 23:48:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 355261 /var/tmp/host.sock 00:12:16.234 23:48:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@823 -- # '[' -z 355261 ']' 00:12:16.234 23:48:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/host.sock 00:12:16.234 23:48:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:16.234 23:48:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:16.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:16.234 23:48:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:16.234 23:48:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:16.234 [2024-07-15 23:48:31.265839] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:12:16.234 [2024-07-15 23:48:31.265891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid355261 ] 00:12:16.234 [2024-07-15 23:48:31.348831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.234 [2024-07-15 23:48:31.414183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.192 23:48:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:17.192 23:48:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # return 0 00:12:17.192 23:48:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.192 23:48:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:17.192 23:48:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid e003fe1b-1270-4150-accb-b458003ac69c 00:12:17.192 23:48:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:17.192 23:48:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E003FE1B12704150ACCBB458003AC69C -i 00:12:17.452 23:48:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 7e8e7a96-7175-47a1-8088-9e128d9345c4 00:12:17.452 23:48:32 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:17.452 23:48:32 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 7E8E7A96717547A180889E128D9345C4 -i 00:12:17.452 23:48:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:17.713 23:48:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:17.713 23:48:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:17.973 23:48:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:17.973 nvme0n1 00:12:18.233 23:48:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:18.233 23:48:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:18.492 nvme1n2 00:12:18.492 23:48:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:18.492 23:48:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:18.492 23:48:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:18.492 23:48:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:18.492 23:48:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:18.492 23:48:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:18.492 23:48:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:18.492 23:48:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:18.493 23:48:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:18.752 23:48:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ e003fe1b-1270-4150-accb-b458003ac69c == \e\0\0\3\f\e\1\b\-\1\2\7\0\-\4\1\5\0\-\a\c\c\b\-\b\4\5\8\0\0\3\a\c\6\9\c ]] 00:12:18.752 23:48:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:18.752 23:48:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:18.752 23:48:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:19.012 23:48:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 7e8e7a96-7175-47a1-8088-9e128d9345c4 == 
\7\e\8\e\7\a\9\6\-\7\1\7\5\-\4\7\a\1\-\8\0\8\8\-\9\e\1\2\8\d\9\3\4\5\c\4 ]] 00:12:19.012 23:48:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 355261 00:12:19.012 23:48:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@942 -- # '[' -z 355261 ']' 00:12:19.012 23:48:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # kill -0 355261 00:12:19.012 23:48:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # uname 00:12:19.012 23:48:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:12:19.012 23:48:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 355261 00:12:19.012 23:48:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:12:19.012 23:48:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:12:19.012 23:48:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@960 -- # echo 'killing process with pid 355261' 00:12:19.012 killing process with pid 355261 00:12:19.012 23:48:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@961 -- # kill 355261 00:12:19.012 23:48:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # wait 355261 00:12:19.271 23:48:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.271 23:48:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:19.271 23:48:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:19.271 23:48:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:19.271 23:48:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:19.271 23:48:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:19.271 23:48:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:19.271 23:48:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:19.271 23:48:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:19.271 rmmod nvme_tcp 00:12:19.271 rmmod nvme_fabrics 00:12:19.271 rmmod nvme_keyring 00:12:19.271 23:48:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:19.271 23:48:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:19.271 23:48:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:19.271 23:48:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 353045 ']' 00:12:19.271 23:48:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 353045 00:12:19.271 23:48:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@942 -- # '[' -z 353045 ']' 00:12:19.271 23:48:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # kill -0 353045 00:12:19.271 23:48:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # uname 00:12:19.271 23:48:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:12:19.271 23:48:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 353045 00:12:19.532 23:48:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:12:19.532 23:48:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:12:19.532 23:48:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@960 
-- # echo 'killing process with pid 353045' 00:12:19.532 killing process with pid 353045 00:12:19.532 23:48:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@961 -- # kill 353045 00:12:19.532 23:48:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # wait 353045 00:12:19.532 23:48:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:19.532 23:48:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:19.532 23:48:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:19.532 23:48:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:19.532 23:48:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:19.532 23:48:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.532 23:48:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.532 23:48:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.075 23:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:22.075 00:12:22.075 real 0m25.054s 00:12:22.075 user 0m24.037s 00:12:22.075 sys 0m7.933s 00:12:22.075 23:48:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1118 -- # xtrace_disable 00:12:22.076 23:48:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:22.076 ************************************ 00:12:22.076 END TEST nvmf_ns_masking 00:12:22.076 ************************************ 00:12:22.076 23:48:36 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:12:22.076 23:48:36 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:22.076 23:48:36 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:22.076 23:48:36 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:12:22.076 23:48:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:12:22.076 23:48:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:22.076 ************************************ 00:12:22.076 START TEST nvmf_nvme_cli 00:12:22.076 ************************************ 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:22.076 * Looking for test storage... 
00:12:22.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:22.076 23:48:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:30.321 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:30.322 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:30.322 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:30.322 Found net devices under 0000:31:00.0: cvl_0_0 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:30.322 Found net devices under 0000:31:00.1: cvl_0_1 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:30.322 23:48:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:30.322 23:48:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:30.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:30.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:12:30.322 00:12:30.322 --- 10.0.0.2 ping statistics --- 00:12:30.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.322 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:12:30.322 23:48:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:30.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:30.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:12:30.322 00:12:30.322 --- 10.0.0.1 ping statistics --- 00:12:30.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.322 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:12:30.322 23:48:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:30.322 23:48:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:30.322 23:48:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:30.322 23:48:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:30.322 23:48:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:30.322 23:48:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:30.322 23:48:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:30.322 23:48:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:30.322 23:48:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:30.322 23:48:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:30.322 23:48:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:30.322 23:48:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:30.322 23:48:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:30.322 23:48:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=360633 00:12:30.322 23:48:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 360633 00:12:30.322 23:48:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:30.322 23:48:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@823 -- # '[' -z 360633 ']' 00:12:30.322 23:48:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.322 23:48:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:30.322 23:48:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.322 23:48:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:30.322 23:48:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:30.322 [2024-07-15 23:48:45.124981] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:12:30.322 [2024-07-15 23:48:45.125067] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.322 [2024-07-15 23:48:45.206813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:30.322 [2024-07-15 23:48:45.283998] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.322 [2024-07-15 23:48:45.284035] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:30.322 [2024-07-15 23:48:45.284043] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.322 [2024-07-15 23:48:45.284049] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.322 [2024-07-15 23:48:45.284055] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:30.322 [2024-07-15 23:48:45.284122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.322 [2024-07-15 23:48:45.284262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.322 [2024-07-15 23:48:45.284407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.322 [2024-07-15 23:48:45.284408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.893 23:48:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:30.893 23:48:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # return 0 00:12:30.893 23:48:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:30.893 23:48:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:30.893 23:48:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:30.893 23:48:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.893 23:48:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:30.893 23:48:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:30.893 23:48:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:30.893 [2024-07-15 23:48:45.949807] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:30.893 23:48:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:30.893 23:48:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:30.893 23:48:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:30.894 23:48:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:30.894 Malloc0 00:12:30.894 23:48:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:30.894 23:48:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:30.894 23:48:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:30.894 23:48:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:30.894 Malloc1 00:12:30.894 23:48:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:30.894 23:48:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:30.894 23:48:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:30.894 23:48:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:30.894 23:48:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:30.894 23:48:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:30.894 23:48:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:30.894 23:48:46 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:30.894 23:48:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:30.894 23:48:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:30.894 23:48:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:30.894 23:48:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:30.894 23:48:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:30.894 23:48:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.894 23:48:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:30.894 23:48:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:30.894 [2024-07-15 23:48:46.039719] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.894 23:48:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:30.894 23:48:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:30.894 23:48:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:30.894 23:48:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:30.894 23:48:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:30.894 23:48:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:12:31.155 00:12:31.155 Discovery Log Number of Records 2, Generation counter 2 00:12:31.155 =====Discovery Log Entry 0====== 00:12:31.155 trtype: tcp 00:12:31.155 adrfam: ipv4 00:12:31.155 subtype: current discovery subsystem 00:12:31.155 treq: not required 00:12:31.155 portid: 0 00:12:31.155 trsvcid: 4420 00:12:31.155 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:31.155 traddr: 10.0.0.2 00:12:31.155 eflags: explicit discovery connections, duplicate discovery information 00:12:31.155 sectype: none 00:12:31.155 =====Discovery Log Entry 1====== 00:12:31.155 trtype: tcp 00:12:31.155 adrfam: ipv4 00:12:31.155 subtype: nvme subsystem 00:12:31.155 treq: not required 00:12:31.155 portid: 0 00:12:31.155 trsvcid: 4420 00:12:31.155 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:31.155 traddr: 10.0.0.2 00:12:31.155 eflags: none 00:12:31.155 sectype: none 00:12:31.155 23:48:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:31.155 23:48:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:31.155 23:48:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:31.155 23:48:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:31.155 23:48:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:31.155 23:48:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:31.155 23:48:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:31.155 23:48:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:31.155 23:48:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:31.155 23:48:46 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:31.155 23:48:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:33.067 23:48:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:33.067 23:48:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1192 -- # local i=0 00:12:33.067 23:48:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.067 23:48:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # [[ -n 2 ]] 00:12:33.067 23:48:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # nvme_device_counter=2 00:12:33.067 23:48:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # sleep 2 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_devices=2 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # return 0 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:34.981 /dev/nvme0n1 ]] 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == 
/dev/nvme* ]] 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1213 -- # local i=0 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1225 -- # return 0 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:34.981 23:48:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:34.981 rmmod nvme_tcp 00:12:34.981 rmmod nvme_fabrics 00:12:34.981 rmmod nvme_keyring 00:12:34.981 23:48:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:34.981 23:48:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:34.981 23:48:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:34.981 23:48:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 
-- # '[' -n 360633 ']' 00:12:34.981 23:48:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 360633 00:12:34.981 23:48:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@942 -- # '[' -z 360633 ']' 00:12:34.981 23:48:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # kill -0 360633 00:12:34.981 23:48:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@947 -- # uname 00:12:34.981 23:48:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:12:34.981 23:48:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 360633 00:12:34.981 23:48:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:12:34.981 23:48:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:12:34.981 23:48:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # echo 'killing process with pid 360633' 00:12:34.981 killing process with pid 360633 00:12:34.981 23:48:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@961 -- # kill 360633 00:12:34.981 23:48:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # wait 360633 00:12:35.242 23:48:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:35.242 23:48:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:35.242 23:48:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:35.242 23:48:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:35.242 23:48:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:35.242 23:48:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.242 23:48:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.242 23:48:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.156 23:48:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:37.156 00:12:37.156 real 0m15.509s 00:12:37.156 user 0m22.166s 00:12:37.156 sys 0m6.510s 00:12:37.156 23:48:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1118 -- # xtrace_disable 00:12:37.156 23:48:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:37.156 ************************************ 00:12:37.156 END TEST nvmf_nvme_cli 00:12:37.156 ************************************ 00:12:37.419 23:48:52 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:12:37.419 23:48:52 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:37.419 23:48:52 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:37.419 23:48:52 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:12:37.419 23:48:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:12:37.419 23:48:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:37.419 ************************************ 00:12:37.419 START TEST nvmf_vfio_user 00:12:37.419 ************************************ 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:37.419 * Looking for test storage... 
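The nvmf_nvme_cli run above exercises the target purely through nvme-cli. A minimal sketch of the same flow, assuming a shortened rpc.py path and eliding the --hostnqn/--hostid arguments the test passes (the log shows the full values):
# discover the subsystems the target exposes over TCP
$ nvme discover -t tcp -a 10.0.0.2 -s 4420
# connect to the data subsystem, then confirm its namespaces show up as block devices
$ nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
$ nvme list
# tear down: drop the host-side connection, then delete the subsystem on the target
$ nvme disconnect -n nqn.2016-06.io.spdk:cnode1
$ ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1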
00:12:37.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 
00:12:37.419 23:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:37.420 23:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:37.420 23:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=362382 00:12:37.420 23:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 362382' 00:12:37.420 Process pid: 362382 00:12:37.420 23:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:37.420 23:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 362382 00:12:37.420 23:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:37.420 23:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@823 -- # '[' -z 362382 ']' 00:12:37.420 23:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.420 23:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:37.420 23:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.420 23:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:37.420 23:48:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:37.420 [2024-07-15 23:48:52.590490] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:12:37.420 [2024-07-15 23:48:52.590546] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.682 [2024-07-15 23:48:52.657878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:37.682 [2024-07-15 23:48:52.722843] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.682 [2024-07-15 23:48:52.722883] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.682 [2024-07-15 23:48:52.722891] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:37.682 [2024-07-15 23:48:52.722897] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:37.682 [2024-07-15 23:48:52.722902] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:37.682 [2024-07-15 23:48:52.723041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.682 [2024-07-15 23:48:52.723163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.682 [2024-07-15 23:48:52.723319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:37.682 [2024-07-15 23:48:52.723461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.252 23:48:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:38.252 23:48:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # return 0 00:12:38.252 23:48:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:39.193 23:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:39.454 23:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:39.454 23:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:39.454 23:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:39.454 23:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:39.454 23:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:39.715 Malloc1 00:12:39.715 23:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:39.715 23:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:39.974 23:48:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:40.233 23:48:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:40.233 23:48:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:40.233 23:48:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:40.233 Malloc2 00:12:40.493 23:48:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:40.493 23:48:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:40.753 23:48:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:40.753 23:48:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:40.753 23:48:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:41.014 23:48:55 
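Consolidated, the vfio-user bring-up just logged reduces to the RPC sequence below. This is a sketch rather than the test script itself: binary and rpc.py paths are shortened, and only the first of the two devices is shown (the log repeats the same calls for Malloc2 and cnode2):
# start the target on cores 0-3, then create the VFIOUSER transport once it is listening on /var/tmp/spdk.sock
$ ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
$ ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
# back the subsystem with a 64 MB malloc bdev (512-byte blocks) and expose it on a vfio-user socket directory
$ mkdir -p /var/run/vfio-user/domain/vfio-user1/1
$ ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
$ ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$ ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$ ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0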
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:41.014 23:48:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:41.014 23:48:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:41.014 23:48:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:41.014 [2024-07-15 23:48:55.971630] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:12:41.014 [2024-07-15 23:48:55.971675] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid363072 ] 00:12:41.014 [2024-07-15 23:48:56.004780] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:41.015 [2024-07-15 23:48:56.010089] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:41.015 [2024-07-15 23:48:56.010107] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff9aecaf000 00:12:41.015 [2024-07-15 23:48:56.011084] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:41.015 [2024-07-15 23:48:56.012085] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:41.015 [2024-07-15 23:48:56.013089] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:41.015 [2024-07-15 23:48:56.014095] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:41.015 [2024-07-15 23:48:56.015106] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:41.015 [2024-07-15 23:48:56.016106] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:41.015 [2024-07-15 23:48:56.017111] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:41.015 [2024-07-15 23:48:56.018119] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:41.015 [2024-07-15 23:48:56.019131] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:41.015 [2024-07-15 23:48:56.019140] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff9aeca4000 00:12:41.015 [2024-07-15 23:48:56.020467] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:41.015 [2024-07-15 23:48:56.037386] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path 
/var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:41.015 [2024-07-15 23:48:56.037412] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:41.015 [2024-07-15 23:48:56.042278] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:41.015 [2024-07-15 23:48:56.042322] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:41.015 [2024-07-15 23:48:56.042412] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:41.015 [2024-07-15 23:48:56.042432] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:41.015 [2024-07-15 23:48:56.042437] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:41.015 [2024-07-15 23:48:56.043277] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:41.015 [2024-07-15 23:48:56.043286] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:41.015 [2024-07-15 23:48:56.043294] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:41.015 [2024-07-15 23:48:56.044286] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:41.015 [2024-07-15 23:48:56.044295] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:41.015 [2024-07-15 23:48:56.044302] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:41.015 [2024-07-15 23:48:56.045292] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:41.015 [2024-07-15 23:48:56.045301] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:41.015 [2024-07-15 23:48:56.046295] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:41.015 [2024-07-15 23:48:56.046304] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:41.015 [2024-07-15 23:48:56.046308] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:41.015 [2024-07-15 23:48:56.046315] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:41.015 [2024-07-15 23:48:56.046420] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:41.015 [2024-07-15 23:48:56.046425] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:41.015 [2024-07-15 23:48:56.046430] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:41.015 [2024-07-15 23:48:56.047302] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:41.015 [2024-07-15 23:48:56.048308] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:41.015 [2024-07-15 23:48:56.049313] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:41.015 [2024-07-15 23:48:56.050307] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:41.015 [2024-07-15 23:48:56.050370] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:41.015 [2024-07-15 23:48:56.051323] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:41.015 [2024-07-15 23:48:56.051331] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:41.015 [2024-07-15 23:48:56.051336] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:41.015 [2024-07-15 23:48:56.051357] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:41.015 [2024-07-15 23:48:56.051364] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:41.015 [2024-07-15 23:48:56.051379] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:41.015 [2024-07-15 23:48:56.051385] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:41.015 [2024-07-15 23:48:56.051397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:41.015 [2024-07-15 23:48:56.051433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:41.015 [2024-07-15 23:48:56.051442] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:41.015 [2024-07-15 23:48:56.051449] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:41.015 [2024-07-15 23:48:56.051456] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:41.015 [2024-07-15 23:48:56.051460] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:41.015 [2024-07-15 23:48:56.051465] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 
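The controller state-machine traces above come from spdk_nvme_identify attaching to that socket. The -r argument is a transport ID string naming the transport type (VFIOUSER), the socket directory as traddr, and the subsystem NQN; stripped of the extra debug log flags the test enables, the invocation reduces to (binary path shortened as an assumption):
$ ./build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'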
00:12:41.015 [2024-07-15 23:48:56.051469] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:41.015 [2024-07-15 23:48:56.051474] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:41.015 [2024-07-15 23:48:56.051482] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:41.015 [2024-07-15 23:48:56.051491] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:41.015 [2024-07-15 23:48:56.051505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:41.015 [2024-07-15 23:48:56.051518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.015 [2024-07-15 23:48:56.051526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.015 [2024-07-15 23:48:56.051535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.015 [2024-07-15 23:48:56.051543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.015 [2024-07-15 23:48:56.051547] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:41.015 [2024-07-15 23:48:56.051556] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:41.015 [2024-07-15 23:48:56.051565] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:41.015 [2024-07-15 23:48:56.051572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:41.015 [2024-07-15 23:48:56.051578] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:41.015 [2024-07-15 23:48:56.051583] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:41.015 [2024-07-15 23:48:56.051589] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:41.015 [2024-07-15 23:48:56.051595] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:41.015 [2024-07-15 23:48:56.051604] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:41.015 [2024-07-15 23:48:56.051616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:41.015 [2024-07-15 23:48:56.051675] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:41.015 [2024-07-15 23:48:56.051682] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:41.015 [2024-07-15 23:48:56.051690] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:41.015 [2024-07-15 23:48:56.051696] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:41.015 [2024-07-15 23:48:56.051702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:41.015 [2024-07-15 23:48:56.051713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:41.015 [2024-07-15 23:48:56.051723] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:41.020 [2024-07-15 23:48:56.051735] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:41.020 [2024-07-15 23:48:56.051743] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:41.020 [2024-07-15 23:48:56.051749] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:41.020 [2024-07-15 23:48:56.051754] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:41.020 [2024-07-15 23:48:56.051760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:41.020 [2024-07-15 23:48:56.051773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:41.020 [2024-07-15 23:48:56.051786] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:41.020 [2024-07-15 23:48:56.051794] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:41.020 [2024-07-15 23:48:56.051801] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:41.020 [2024-07-15 23:48:56.051805] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:41.020 [2024-07-15 23:48:56.051811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:41.020 [2024-07-15 23:48:56.051820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:41.020 [2024-07-15 23:48:56.051828] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:41.020 [2024-07-15 23:48:56.051834] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:41.020 [2024-07-15 23:48:56.051842] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:41.020 [2024-07-15 23:48:56.051848] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:41.020 [2024-07-15 23:48:56.051853] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:41.020 [2024-07-15 23:48:56.051858] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:41.020 [2024-07-15 23:48:56.051863] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:41.020 [2024-07-15 23:48:56.051868] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:41.020 [2024-07-15 23:48:56.051873] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:41.020 [2024-07-15 23:48:56.051894] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:41.020 [2024-07-15 23:48:56.051904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:41.020 [2024-07-15 23:48:56.051915] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:41.020 [2024-07-15 23:48:56.051925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:41.020 [2024-07-15 23:48:56.051935] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:41.020 [2024-07-15 23:48:56.051942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:41.020 [2024-07-15 23:48:56.051953] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:41.020 [2024-07-15 23:48:56.051960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:41.020 [2024-07-15 23:48:56.051973] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:41.020 [2024-07-15 23:48:56.051977] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:41.020 [2024-07-15 23:48:56.051981] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:41.020 [2024-07-15 23:48:56.051984] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:41.020 [2024-07-15 23:48:56.051990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:41.021 [2024-07-15 23:48:56.051998] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:41.021 [2024-07-15 23:48:56.052002] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:41.021 [2024-07-15 23:48:56.052008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:41.021 [2024-07-15 23:48:56.052015] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:41.021 [2024-07-15 23:48:56.052019] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:41.021 [2024-07-15 23:48:56.052025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:41.021 [2024-07-15 23:48:56.052033] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:41.021 [2024-07-15 23:48:56.052037] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:41.021 [2024-07-15 23:48:56.052043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:41.021 [2024-07-15 23:48:56.052050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:41.021 [2024-07-15 23:48:56.052062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:41.021 [2024-07-15 23:48:56.052072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:41.021 [2024-07-15 23:48:56.052079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:41.021 ===================================================== 00:12:41.021 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:41.021 ===================================================== 00:12:41.021 Controller Capabilities/Features 00:12:41.021 ================================ 00:12:41.021 Vendor ID: 4e58 00:12:41.021 Subsystem Vendor ID: 4e58 00:12:41.021 Serial Number: SPDK1 00:12:41.021 Model Number: SPDK bdev Controller 00:12:41.021 Firmware Version: 24.09 00:12:41.021 Recommended Arb Burst: 6 00:12:41.021 IEEE OUI Identifier: 8d 6b 50 00:12:41.021 Multi-path I/O 00:12:41.021 May have multiple subsystem ports: Yes 00:12:41.021 May have multiple controllers: Yes 00:12:41.021 Associated with SR-IOV VF: No 00:12:41.021 Max Data Transfer Size: 131072 00:12:41.021 Max Number of Namespaces: 32 00:12:41.021 Max Number of I/O Queues: 127 00:12:41.021 NVMe Specification Version (VS): 1.3 00:12:41.021 NVMe Specification Version (Identify): 1.3 00:12:41.021 Maximum Queue Entries: 256 00:12:41.021 Contiguous Queues Required: Yes 00:12:41.021 Arbitration Mechanisms Supported 00:12:41.021 Weighted Round Robin: Not Supported 00:12:41.021 Vendor Specific: Not Supported 00:12:41.021 Reset Timeout: 15000 ms 00:12:41.021 Doorbell Stride: 4 bytes 00:12:41.021 NVM Subsystem Reset: Not Supported 00:12:41.021 Command Sets Supported 00:12:41.021 NVM Command Set: Supported 00:12:41.021 Boot Partition: Not Supported 00:12:41.021 Memory Page Size Minimum: 4096 bytes 00:12:41.021 Memory Page Size Maximum: 4096 bytes 00:12:41.021 Persistent Memory Region: Not Supported 00:12:41.021 Optional Asynchronous 
Events Supported 00:12:41.021 Namespace Attribute Notices: Supported 00:12:41.021 Firmware Activation Notices: Not Supported 00:12:41.021 ANA Change Notices: Not Supported 00:12:41.021 PLE Aggregate Log Change Notices: Not Supported 00:12:41.021 LBA Status Info Alert Notices: Not Supported 00:12:41.021 EGE Aggregate Log Change Notices: Not Supported 00:12:41.021 Normal NVM Subsystem Shutdown event: Not Supported 00:12:41.021 Zone Descriptor Change Notices: Not Supported 00:12:41.021 Discovery Log Change Notices: Not Supported 00:12:41.021 Controller Attributes 00:12:41.021 128-bit Host Identifier: Supported 00:12:41.021 Non-Operational Permissive Mode: Not Supported 00:12:41.021 NVM Sets: Not Supported 00:12:41.021 Read Recovery Levels: Not Supported 00:12:41.021 Endurance Groups: Not Supported 00:12:41.021 Predictable Latency Mode: Not Supported 00:12:41.021 Traffic Based Keep ALive: Not Supported 00:12:41.021 Namespace Granularity: Not Supported 00:12:41.021 SQ Associations: Not Supported 00:12:41.021 UUID List: Not Supported 00:12:41.021 Multi-Domain Subsystem: Not Supported 00:12:41.021 Fixed Capacity Management: Not Supported 00:12:41.021 Variable Capacity Management: Not Supported 00:12:41.021 Delete Endurance Group: Not Supported 00:12:41.021 Delete NVM Set: Not Supported 00:12:41.021 Extended LBA Formats Supported: Not Supported 00:12:41.021 Flexible Data Placement Supported: Not Supported 00:12:41.021 00:12:41.021 Controller Memory Buffer Support 00:12:41.021 ================================ 00:12:41.021 Supported: No 00:12:41.021 00:12:41.021 Persistent Memory Region Support 00:12:41.021 ================================ 00:12:41.021 Supported: No 00:12:41.021 00:12:41.021 Admin Command Set Attributes 00:12:41.021 ============================ 00:12:41.021 Security Send/Receive: Not Supported 00:12:41.021 Format NVM: Not Supported 00:12:41.021 Firmware Activate/Download: Not Supported 00:12:41.021 Namespace Management: Not Supported 00:12:41.021 Device Self-Test: Not Supported 00:12:41.021 Directives: Not Supported 00:12:41.021 NVMe-MI: Not Supported 00:12:41.021 Virtualization Management: Not Supported 00:12:41.021 Doorbell Buffer Config: Not Supported 00:12:41.021 Get LBA Status Capability: Not Supported 00:12:41.021 Command & Feature Lockdown Capability: Not Supported 00:12:41.021 Abort Command Limit: 4 00:12:41.021 Async Event Request Limit: 4 00:12:41.021 Number of Firmware Slots: N/A 00:12:41.021 Firmware Slot 1 Read-Only: N/A 00:12:41.021 Firmware Activation Without Reset: N/A 00:12:41.021 Multiple Update Detection Support: N/A 00:12:41.021 Firmware Update Granularity: No Information Provided 00:12:41.021 Per-Namespace SMART Log: No 00:12:41.021 Asymmetric Namespace Access Log Page: Not Supported 00:12:41.021 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:41.021 Command Effects Log Page: Supported 00:12:41.021 Get Log Page Extended Data: Supported 00:12:41.021 Telemetry Log Pages: Not Supported 00:12:41.021 Persistent Event Log Pages: Not Supported 00:12:41.021 Supported Log Pages Log Page: May Support 00:12:41.021 Commands Supported & Effects Log Page: Not Supported 00:12:41.021 Feature Identifiers & Effects Log Page:May Support 00:12:41.021 NVMe-MI Commands & Effects Log Page: May Support 00:12:41.021 Data Area 4 for Telemetry Log: Not Supported 00:12:41.021 Error Log Page Entries Supported: 128 00:12:41.021 Keep Alive: Supported 00:12:41.021 Keep Alive Granularity: 10000 ms 00:12:41.021 00:12:41.021 NVM Command Set Attributes 00:12:41.021 ========================== 
00:12:41.021 Submission Queue Entry Size 00:12:41.021 Max: 64 00:12:41.021 Min: 64 00:12:41.021 Completion Queue Entry Size 00:12:41.021 Max: 16 00:12:41.021 Min: 16 00:12:41.021 Number of Namespaces: 32 00:12:41.021 Compare Command: Supported 00:12:41.021 Write Uncorrectable Command: Not Supported 00:12:41.021 Dataset Management Command: Supported 00:12:41.021 Write Zeroes Command: Supported 00:12:41.021 Set Features Save Field: Not Supported 00:12:41.021 Reservations: Not Supported 00:12:41.021 Timestamp: Not Supported 00:12:41.021 Copy: Supported 00:12:41.021 Volatile Write Cache: Present 00:12:41.021 Atomic Write Unit (Normal): 1 00:12:41.021 Atomic Write Unit (PFail): 1 00:12:41.021 Atomic Compare & Write Unit: 1 00:12:41.021 Fused Compare & Write: Supported 00:12:41.021 Scatter-Gather List 00:12:41.021 SGL Command Set: Supported (Dword aligned) 00:12:41.021 SGL Keyed: Not Supported 00:12:41.021 SGL Bit Bucket Descriptor: Not Supported 00:12:41.021 SGL Metadata Pointer: Not Supported 00:12:41.021 Oversized SGL: Not Supported 00:12:41.021 SGL Metadata Address: Not Supported 00:12:41.021 SGL Offset: Not Supported 00:12:41.021 Transport SGL Data Block: Not Supported 00:12:41.021 Replay Protected Memory Block: Not Supported 00:12:41.021 00:12:41.021 Firmware Slot Information 00:12:41.021 ========================= 00:12:41.021 Active slot: 1 00:12:41.021 Slot 1 Firmware Revision: 24.09 00:12:41.021 00:12:41.021 00:12:41.021 Commands Supported and Effects 00:12:41.021 ============================== 00:12:41.021 Admin Commands 00:12:41.021 -------------- 00:12:41.021 Get Log Page (02h): Supported 00:12:41.021 Identify (06h): Supported 00:12:41.021 Abort (08h): Supported 00:12:41.021 Set Features (09h): Supported 00:12:41.021 Get Features (0Ah): Supported 00:12:41.021 Asynchronous Event Request (0Ch): Supported 00:12:41.021 Keep Alive (18h): Supported 00:12:41.021 I/O Commands 00:12:41.021 ------------ 00:12:41.021 Flush (00h): Supported LBA-Change 00:12:41.021 Write (01h): Supported LBA-Change 00:12:41.021 Read (02h): Supported 00:12:41.021 Compare (05h): Supported 00:12:41.021 Write Zeroes (08h): Supported LBA-Change 00:12:41.021 Dataset Management (09h): Supported LBA-Change 00:12:41.021 Copy (19h): Supported LBA-Change 00:12:41.021 00:12:41.021 Error Log 00:12:41.021 ========= 00:12:41.021 00:12:41.021 Arbitration 00:12:41.021 =========== 00:12:41.021 Arbitration Burst: 1 00:12:41.021 00:12:41.021 Power Management 00:12:41.021 ================ 00:12:41.021 Number of Power States: 1 00:12:41.021 Current Power State: Power State #0 00:12:41.021 Power State #0: 00:12:41.021 Max Power: 0.00 W 00:12:41.021 Non-Operational State: Operational 00:12:41.021 Entry Latency: Not Reported 00:12:41.021 Exit Latency: Not Reported 00:12:41.022 Relative Read Throughput: 0 00:12:41.022 Relative Read Latency: 0 00:12:41.022 Relative Write Throughput: 0 00:12:41.022 Relative Write Latency: 0 00:12:41.022 Idle Power: Not Reported 00:12:41.022 Active Power: Not Reported 00:12:41.022 Non-Operational Permissive Mode: Not Supported 00:12:41.022 00:12:41.022 Health Information 00:12:41.022 ================== 00:12:41.022 Critical Warnings: 00:12:41.022 Available Spare Space: OK 00:12:41.022 Temperature: OK 00:12:41.022 Device Reliability: OK 00:12:41.022 Read Only: No 00:12:41.022 Volatile Memory Backup: OK 00:12:41.022 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:41.022 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:41.022 Available Spare: 0% 00:12:41.022 Available Sp[2024-07-15 23:48:56.052179] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:41.022 [2024-07-15 23:48:56.052188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:41.022 [2024-07-15 23:48:56.052219] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:41.022 [2024-07-15 23:48:56.052228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.022 [2024-07-15 23:48:56.052240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.022 [2024-07-15 23:48:56.052246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.022 [2024-07-15 23:48:56.052252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.022 [2024-07-15 23:48:56.052329] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:41.022 [2024-07-15 23:48:56.052340] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:41.022 [2024-07-15 23:48:56.053332] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:41.022 [2024-07-15 23:48:56.053371] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:41.022 [2024-07-15 23:48:56.053378] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:41.022 [2024-07-15 23:48:56.054338] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:41.022 [2024-07-15 23:48:56.054350] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:41.022 [2024-07-15 23:48:56.054419] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:41.022 [2024-07-15 23:48:56.058238] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:41.022 are Threshold: 0% 00:12:41.022 Life Percentage Used: 0% 00:12:41.022 Data Units Read: 0 00:12:41.022 Data Units Written: 0 00:12:41.022 Host Read Commands: 0 00:12:41.022 Host Write Commands: 0 00:12:41.022 Controller Busy Time: 0 minutes 00:12:41.022 Power Cycles: 0 00:12:41.022 Power On Hours: 0 hours 00:12:41.022 Unsafe Shutdowns: 0 00:12:41.022 Unrecoverable Media Errors: 0 00:12:41.022 Lifetime Error Log Entries: 0 00:12:41.022 Warning Temperature Time: 0 minutes 00:12:41.022 Critical Temperature Time: 0 minutes 00:12:41.022 00:12:41.022 Number of Queues 00:12:41.022 ================ 00:12:41.022 Number of I/O Submission Queues: 127 00:12:41.022 Number of I/O Completion Queues: 127 00:12:41.022 00:12:41.022 Active Namespaces 00:12:41.022 ================= 00:12:41.022 Namespace ID:1 00:12:41.022 Error Recovery Timeout: Unlimited 00:12:41.022 Command Set Identifier: NVM (00h) 00:12:41.022 
Deallocate: Supported 00:12:41.022 Deallocated/Unwritten Error: Not Supported 00:12:41.022 Deallocated Read Value: Unknown 00:12:41.022 Deallocate in Write Zeroes: Not Supported 00:12:41.022 Deallocated Guard Field: 0xFFFF 00:12:41.022 Flush: Supported 00:12:41.022 Reservation: Supported 00:12:41.022 Namespace Sharing Capabilities: Multiple Controllers 00:12:41.022 Size (in LBAs): 131072 (0GiB) 00:12:41.022 Capacity (in LBAs): 131072 (0GiB) 00:12:41.022 Utilization (in LBAs): 131072 (0GiB) 00:12:41.022 NGUID: 823FBF1FE7324283B29469205B01C824 00:12:41.022 UUID: 823fbf1f-e732-4283-b294-69205b01c824 00:12:41.022 Thin Provisioning: Not Supported 00:12:41.022 Per-NS Atomic Units: Yes 00:12:41.022 Atomic Boundary Size (Normal): 0 00:12:41.022 Atomic Boundary Size (PFail): 0 00:12:41.022 Atomic Boundary Offset: 0 00:12:41.022 Maximum Single Source Range Length: 65535 00:12:41.022 Maximum Copy Length: 65535 00:12:41.022 Maximum Source Range Count: 1 00:12:41.022 NGUID/EUI64 Never Reused: No 00:12:41.022 Namespace Write Protected: No 00:12:41.022 Number of LBA Formats: 1 00:12:41.022 Current LBA Format: LBA Format #00 00:12:41.022 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:41.022 00:12:41.022 23:48:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:41.282 [2024-07-15 23:48:56.241865] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:46.592 Initializing NVMe Controllers 00:12:46.592 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:46.592 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:46.592 Initialization complete. Launching workers. 00:12:46.592 ======================================================== 00:12:46.592 Latency(us) 00:12:46.592 Device Information : IOPS MiB/s Average min max 00:12:46.592 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39953.05 156.07 3203.64 837.49 6817.36 00:12:46.592 ======================================================== 00:12:46.592 Total : 39953.05 156.07 3203.64 837.49 6817.36 00:12:46.592 00:12:46.592 [2024-07-15 23:49:01.262209] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:46.592 23:49:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:46.592 [2024-07-15 23:49:01.446080] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:51.882 Initializing NVMe Controllers 00:12:51.882 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:51.882 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:51.882 Initialization complete. Launching workers. 
00:12:51.882 ======================================================== 00:12:51.882 Latency(us) 00:12:51.882 Device Information : IOPS MiB/s Average min max 00:12:51.882 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.74 5984.26 9977.67 00:12:51.882 ======================================================== 00:12:51.882 Total : 16051.20 62.70 7980.74 5984.26 9977.67 00:12:51.882 00:12:51.882 [2024-07-15 23:49:06.479929] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:51.882 23:49:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:51.882 [2024-07-15 23:49:06.682839] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:57.173 [2024-07-15 23:49:11.761476] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:57.173 Initializing NVMe Controllers 00:12:57.173 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:57.173 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:57.173 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:57.173 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:57.173 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:57.173 Initialization complete. Launching workers. 00:12:57.173 Starting thread on core 2 00:12:57.173 Starting thread on core 3 00:12:57.173 Starting thread on core 1 00:12:57.173 23:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:57.173 [2024-07-15 23:49:12.030080] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:01.374 [2024-07-15 23:49:15.699404] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:01.374 Initializing NVMe Controllers 00:13:01.374 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:01.374 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:01.374 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:01.374 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:01.374 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:01.374 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:01.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:01.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:01.374 Initialization complete. Launching workers. 
00:13:01.374 Starting thread on core 1 with urgent priority queue 00:13:01.374 Starting thread on core 2 with urgent priority queue 00:13:01.374 Starting thread on core 3 with urgent priority queue 00:13:01.374 Starting thread on core 0 with urgent priority queue 00:13:01.374 SPDK bdev Controller (SPDK1 ) core 0: 8311.00 IO/s 12.03 secs/100000 ios 00:13:01.374 SPDK bdev Controller (SPDK1 ) core 1: 11854.00 IO/s 8.44 secs/100000 ios 00:13:01.374 SPDK bdev Controller (SPDK1 ) core 2: 9041.33 IO/s 11.06 secs/100000 ios 00:13:01.374 SPDK bdev Controller (SPDK1 ) core 3: 8305.33 IO/s 12.04 secs/100000 ios 00:13:01.374 ======================================================== 00:13:01.374 00:13:01.374 23:49:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:01.374 [2024-07-15 23:49:15.970656] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:01.374 Initializing NVMe Controllers 00:13:01.374 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:01.374 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:01.374 Namespace ID: 1 size: 0GB 00:13:01.374 Initialization complete. 00:13:01.374 INFO: using host memory buffer for IO 00:13:01.374 Hello world! 00:13:01.374 [2024-07-15 23:49:16.004847] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:01.374 23:49:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:01.374 [2024-07-15 23:49:16.275641] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:02.317 Initializing NVMe Controllers 00:13:02.317 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:02.317 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:02.317 Initialization complete. Launching workers. 
00:13:02.317 submit (in ns) avg, min, max = 7174.3, 3915.8, 4000607.5 00:13:02.317 complete (in ns) avg, min, max = 17648.1, 2392.5, 4001642.5 00:13:02.317 00:13:02.317 Submit histogram 00:13:02.317 ================ 00:13:02.317 Range in us Cumulative Count 00:13:02.317 3.893 - 3.920: 0.3485% ( 67) 00:13:02.317 3.920 - 3.947: 4.2440% ( 749) 00:13:02.317 3.947 - 3.973: 13.0806% ( 1699) 00:13:02.317 3.973 - 4.000: 24.5904% ( 2213) 00:13:02.317 4.000 - 4.027: 36.8544% ( 2358) 00:13:02.317 4.027 - 4.053: 50.3251% ( 2590) 00:13:02.317 4.053 - 4.080: 67.3584% ( 3275) 00:13:02.317 4.080 - 4.107: 81.9213% ( 2800) 00:13:02.317 4.107 - 4.133: 91.4183% ( 1826) 00:13:02.317 4.133 - 4.160: 96.6245% ( 1001) 00:13:02.317 4.160 - 4.187: 98.5437% ( 369) 00:13:02.317 4.187 - 4.213: 99.2094% ( 128) 00:13:02.317 4.213 - 4.240: 99.4279% ( 42) 00:13:02.317 4.240 - 4.267: 99.4643% ( 7) 00:13:02.317 4.267 - 4.293: 99.4799% ( 3) 00:13:02.317 4.293 - 4.320: 99.4903% ( 2) 00:13:02.317 4.320 - 4.347: 99.4955% ( 1) 00:13:02.317 4.373 - 4.400: 99.5007% ( 1) 00:13:02.317 4.427 - 4.453: 99.5059% ( 1) 00:13:02.317 4.480 - 4.507: 99.5111% ( 1) 00:13:02.317 4.640 - 4.667: 99.5163% ( 1) 00:13:02.317 4.667 - 4.693: 99.5215% ( 1) 00:13:02.317 4.933 - 4.960: 99.5267% ( 1) 00:13:02.317 4.987 - 5.013: 99.5319% ( 1) 00:13:02.317 5.093 - 5.120: 99.5371% ( 1) 00:13:02.317 5.173 - 5.200: 99.5423% ( 1) 00:13:02.317 5.227 - 5.253: 99.5475% ( 1) 00:13:02.317 5.333 - 5.360: 99.5527% ( 1) 00:13:02.317 5.387 - 5.413: 99.5579% ( 1) 00:13:02.317 5.467 - 5.493: 99.5631% ( 1) 00:13:02.317 5.520 - 5.547: 99.5683% ( 1) 00:13:02.317 5.787 - 5.813: 99.5735% ( 1) 00:13:02.317 5.813 - 5.840: 99.5839% ( 2) 00:13:02.317 5.920 - 5.947: 99.5943% ( 2) 00:13:02.317 5.947 - 5.973: 99.5995% ( 1) 00:13:02.317 6.000 - 6.027: 99.6099% ( 2) 00:13:02.317 6.027 - 6.053: 99.6255% ( 3) 00:13:02.317 6.053 - 6.080: 99.6359% ( 2) 00:13:02.317 6.080 - 6.107: 99.6567% ( 4) 00:13:02.317 6.133 - 6.160: 99.6723% ( 3) 00:13:02.317 6.160 - 6.187: 99.6879% ( 3) 00:13:02.317 6.187 - 6.213: 99.6983% ( 2) 00:13:02.318 6.213 - 6.240: 99.7035% ( 1) 00:13:02.318 6.240 - 6.267: 99.7139% ( 2) 00:13:02.318 6.267 - 6.293: 99.7295% ( 3) 00:13:02.318 6.293 - 6.320: 99.7399% ( 2) 00:13:02.318 6.320 - 6.347: 99.7504% ( 2) 00:13:02.318 6.347 - 6.373: 99.7556% ( 1) 00:13:02.318 6.373 - 6.400: 99.7764% ( 4) 00:13:02.318 6.453 - 6.480: 99.7816% ( 1) 00:13:02.318 6.480 - 6.507: 99.7920% ( 2) 00:13:02.318 6.560 - 6.587: 99.7972% ( 1) 00:13:02.318 6.587 - 6.613: 99.8024% ( 1) 00:13:02.318 6.613 - 6.640: 99.8128% ( 2) 00:13:02.318 6.640 - 6.667: 99.8232% ( 2) 00:13:02.318 6.773 - 6.800: 99.8284% ( 1) 00:13:02.318 6.827 - 6.880: 99.8388% ( 2) 00:13:02.318 6.933 - 6.987: 99.8440% ( 1) 00:13:02.318 7.093 - 7.147: 99.8492% ( 1) 00:13:02.318 7.200 - 7.253: 99.8544% ( 1) 00:13:02.318 7.253 - 7.307: 99.8596% ( 1) 00:13:02.318 7.413 - 7.467: 99.8648% ( 1) 00:13:02.318 7.467 - 7.520: 99.8700% ( 1) 00:13:02.318 7.573 - 7.627: 99.8804% ( 2) 00:13:02.318 7.627 - 7.680: 99.8856% ( 1) 00:13:02.318 7.840 - 7.893: 99.8960% ( 2) 00:13:02.318 8.000 - 8.053: 99.9012% ( 1) 00:13:02.318 8.160 - 8.213: 99.9064% ( 1) 00:13:02.318 8.213 - 8.267: 99.9116% ( 1) 00:13:02.318 8.533 - 8.587: 99.9168% ( 1) 00:13:02.318 11.520 - 11.573: 99.9220% ( 1) 00:13:02.318 3986.773 - 4014.080: 100.0000% ( 15) 00:13:02.318 00:13:02.318 Complete histogram 00:13:02.318 ================== 00:13:02.318 Range in us Cumulative Count 00:13:02.318 2.387 - 2.400: 0.0052% ( 1) 00:13:02.318 2.400 - 2.413: 0.5669% ( 108) 00:13:02.318 2.413 - 2.427: 
0.8530% ( 55) 00:13:02.318 2.427 - 2.440: 1.0454% ( 37) 00:13:02.318 2.440 - 2.453: 1.1546% ( 21) 00:13:02.318 2.453 - [2024-07-15 23:49:17.302144] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:02.318 2.467: 50.1690% ( 9424) 00:13:02.318 2.467 - 2.480: 67.7225% ( 3375) 00:13:02.318 2.480 - 2.493: 81.0111% ( 2555) 00:13:02.318 2.493 - 2.507: 88.8490% ( 1507) 00:13:02.318 2.507 - 2.520: 91.4339% ( 497) 00:13:02.318 2.520 - 2.533: 93.6027% ( 417) 00:13:02.318 2.533 - 2.547: 96.0992% ( 480) 00:13:02.318 2.547 - 2.560: 97.6387% ( 296) 00:13:02.318 2.560 - 2.573: 98.6113% ( 187) 00:13:02.318 2.573 - 2.587: 99.1626% ( 106) 00:13:02.318 2.587 - 2.600: 99.3863% ( 43) 00:13:02.318 2.600 - 2.613: 99.4123% ( 5) 00:13:02.318 2.613 - 2.627: 99.4175% ( 1) 00:13:02.318 2.627 - 2.640: 99.4227% ( 1) 00:13:02.318 2.800 - 2.813: 99.4279% ( 1) 00:13:02.318 4.187 - 4.213: 99.4331% ( 1) 00:13:02.318 4.240 - 4.267: 99.4435% ( 2) 00:13:02.318 4.267 - 4.293: 99.4539% ( 2) 00:13:02.318 4.320 - 4.347: 99.4643% ( 2) 00:13:02.318 4.427 - 4.453: 99.4695% ( 1) 00:13:02.318 4.453 - 4.480: 99.4747% ( 1) 00:13:02.318 4.533 - 4.560: 99.4799% ( 1) 00:13:02.318 4.560 - 4.587: 99.4851% ( 1) 00:13:02.318 4.587 - 4.613: 99.4903% ( 1) 00:13:02.318 4.613 - 4.640: 99.5007% ( 2) 00:13:02.318 4.667 - 4.693: 99.5163% ( 3) 00:13:02.318 4.693 - 4.720: 99.5215% ( 1) 00:13:02.318 4.747 - 4.773: 99.5267% ( 1) 00:13:02.318 4.800 - 4.827: 99.5371% ( 2) 00:13:02.318 4.880 - 4.907: 99.5423% ( 1) 00:13:02.318 5.093 - 5.120: 99.5475% ( 1) 00:13:02.318 5.200 - 5.227: 99.5527% ( 1) 00:13:02.318 5.387 - 5.413: 99.5631% ( 2) 00:13:02.318 5.467 - 5.493: 99.5683% ( 1) 00:13:02.318 5.680 - 5.707: 99.5735% ( 1) 00:13:02.318 5.707 - 5.733: 99.5787% ( 1) 00:13:02.318 5.760 - 5.787: 99.5839% ( 1) 00:13:02.318 5.787 - 5.813: 99.5891% ( 1) 00:13:02.318 5.840 - 5.867: 99.5943% ( 1) 00:13:02.318 6.000 - 6.027: 99.5995% ( 1) 00:13:02.318 6.027 - 6.053: 99.6047% ( 1) 00:13:02.318 6.293 - 6.320: 99.6099% ( 1) 00:13:02.318 9.600 - 9.653: 99.6151% ( 1) 00:13:02.318 10.933 - 10.987: 99.6203% ( 1) 00:13:02.318 3986.773 - 4014.080: 100.0000% ( 73) 00:13:02.318 00:13:02.318 23:49:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:02.318 23:49:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:02.318 23:49:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:02.318 23:49:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:02.318 23:49:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:02.318 [ 00:13:02.318 { 00:13:02.318 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:02.318 "subtype": "Discovery", 00:13:02.318 "listen_addresses": [], 00:13:02.318 "allow_any_host": true, 00:13:02.318 "hosts": [] 00:13:02.318 }, 00:13:02.318 { 00:13:02.318 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:02.318 "subtype": "NVMe", 00:13:02.318 "listen_addresses": [ 00:13:02.318 { 00:13:02.318 "trtype": "VFIOUSER", 00:13:02.318 "adrfam": "IPv4", 00:13:02.318 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:02.318 "trsvcid": "0" 00:13:02.318 } 00:13:02.318 ], 00:13:02.318 "allow_any_host": true, 00:13:02.318 "hosts": [], 00:13:02.318 "serial_number": 
"SPDK1", 00:13:02.318 "model_number": "SPDK bdev Controller", 00:13:02.318 "max_namespaces": 32, 00:13:02.318 "min_cntlid": 1, 00:13:02.318 "max_cntlid": 65519, 00:13:02.318 "namespaces": [ 00:13:02.318 { 00:13:02.318 "nsid": 1, 00:13:02.318 "bdev_name": "Malloc1", 00:13:02.318 "name": "Malloc1", 00:13:02.318 "nguid": "823FBF1FE7324283B29469205B01C824", 00:13:02.318 "uuid": "823fbf1f-e732-4283-b294-69205b01c824" 00:13:02.318 } 00:13:02.318 ] 00:13:02.318 }, 00:13:02.318 { 00:13:02.318 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:02.318 "subtype": "NVMe", 00:13:02.318 "listen_addresses": [ 00:13:02.318 { 00:13:02.318 "trtype": "VFIOUSER", 00:13:02.318 "adrfam": "IPv4", 00:13:02.318 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:02.318 "trsvcid": "0" 00:13:02.318 } 00:13:02.318 ], 00:13:02.318 "allow_any_host": true, 00:13:02.318 "hosts": [], 00:13:02.318 "serial_number": "SPDK2", 00:13:02.318 "model_number": "SPDK bdev Controller", 00:13:02.318 "max_namespaces": 32, 00:13:02.318 "min_cntlid": 1, 00:13:02.318 "max_cntlid": 65519, 00:13:02.318 "namespaces": [ 00:13:02.318 { 00:13:02.318 "nsid": 1, 00:13:02.318 "bdev_name": "Malloc2", 00:13:02.318 "name": "Malloc2", 00:13:02.318 "nguid": "C75818454DC04ACA8B5AF4E9D06DFD55", 00:13:02.318 "uuid": "c7581845-4dc0-4aca-8b5a-f4e9d06dfd55" 00:13:02.318 } 00:13:02.318 ] 00:13:02.318 } 00:13:02.318 ] 00:13:02.579 23:49:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:02.579 23:49:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=367143 00:13:02.579 23:49:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:02.579 23:49:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1259 -- # local i=0 00:13:02.579 23:49:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:02.579 23:49:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1260 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:02.579 23:49:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:02.579 23:49:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # return 0 00:13:02.579 23:49:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:02.579 23:49:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:02.579 Malloc3 00:13:02.579 23:49:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:02.579 [2024-07-15 23:49:17.702856] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:02.840 [2024-07-15 23:49:17.848811] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:02.840 23:49:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:02.840 Asynchronous Event Request test 00:13:02.840 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:02.840 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:02.840 Registering asynchronous event callbacks... 00:13:02.840 Starting namespace attribute notice tests for all controllers... 00:13:02.840 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:02.840 aer_cb - Changed Namespace 00:13:02.840 Cleaning up... 00:13:02.840 [ 00:13:02.840 { 00:13:02.840 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:02.840 "subtype": "Discovery", 00:13:02.840 "listen_addresses": [], 00:13:02.840 "allow_any_host": true, 00:13:02.840 "hosts": [] 00:13:02.840 }, 00:13:02.840 { 00:13:02.840 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:02.840 "subtype": "NVMe", 00:13:02.840 "listen_addresses": [ 00:13:02.840 { 00:13:02.840 "trtype": "VFIOUSER", 00:13:02.840 "adrfam": "IPv4", 00:13:02.840 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:02.840 "trsvcid": "0" 00:13:02.840 } 00:13:02.840 ], 00:13:02.840 "allow_any_host": true, 00:13:02.840 "hosts": [], 00:13:02.840 "serial_number": "SPDK1", 00:13:02.840 "model_number": "SPDK bdev Controller", 00:13:02.840 "max_namespaces": 32, 00:13:02.840 "min_cntlid": 1, 00:13:02.840 "max_cntlid": 65519, 00:13:02.840 "namespaces": [ 00:13:02.840 { 00:13:02.840 "nsid": 1, 00:13:02.840 "bdev_name": "Malloc1", 00:13:02.840 "name": "Malloc1", 00:13:02.840 "nguid": "823FBF1FE7324283B29469205B01C824", 00:13:02.840 "uuid": "823fbf1f-e732-4283-b294-69205b01c824" 00:13:02.840 }, 00:13:02.840 { 00:13:02.840 "nsid": 2, 00:13:02.840 "bdev_name": "Malloc3", 00:13:02.840 "name": "Malloc3", 00:13:02.840 "nguid": "04DA85A4273342D5BC3192F9C46CD9D6", 00:13:02.840 "uuid": "04da85a4-2733-42d5-bc31-92f9c46cd9d6" 00:13:02.840 } 00:13:02.840 ] 00:13:02.840 }, 00:13:02.840 { 00:13:02.840 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:02.840 "subtype": "NVMe", 00:13:02.840 "listen_addresses": [ 00:13:02.840 { 00:13:02.840 "trtype": "VFIOUSER", 00:13:02.840 "adrfam": "IPv4", 00:13:02.840 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:02.840 "trsvcid": "0" 00:13:02.840 } 00:13:02.840 ], 00:13:02.840 "allow_any_host": true, 00:13:02.840 "hosts": [], 00:13:02.840 "serial_number": "SPDK2", 00:13:02.840 "model_number": "SPDK bdev Controller", 00:13:02.840 "max_namespaces": 32, 00:13:02.840 "min_cntlid": 1, 00:13:02.840 
"max_cntlid": 65519, 00:13:02.840 "namespaces": [ 00:13:02.840 { 00:13:02.840 "nsid": 1, 00:13:02.840 "bdev_name": "Malloc2", 00:13:02.840 "name": "Malloc2", 00:13:02.840 "nguid": "C75818454DC04ACA8B5AF4E9D06DFD55", 00:13:02.840 "uuid": "c7581845-4dc0-4aca-8b5a-f4e9d06dfd55" 00:13:02.840 } 00:13:02.840 ] 00:13:02.840 } 00:13:02.840 ] 00:13:03.102 23:49:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 367143 00:13:03.102 23:49:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:03.102 23:49:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:03.102 23:49:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:03.102 23:49:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:03.102 [2024-07-15 23:49:18.069136] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:13:03.102 [2024-07-15 23:49:18.069195] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid367369 ] 00:13:03.102 [2024-07-15 23:49:18.098947] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:03.102 [2024-07-15 23:49:18.107464] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:03.102 [2024-07-15 23:49:18.107485] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd2873b5000 00:13:03.102 [2024-07-15 23:49:18.108461] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:03.102 [2024-07-15 23:49:18.109463] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:03.102 [2024-07-15 23:49:18.110467] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:03.103 [2024-07-15 23:49:18.111476] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:03.103 [2024-07-15 23:49:18.112482] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:03.103 [2024-07-15 23:49:18.113487] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:03.103 [2024-07-15 23:49:18.114493] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:03.103 [2024-07-15 23:49:18.115499] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:03.103 [2024-07-15 23:49:18.116505] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:03.103 
[2024-07-15 23:49:18.116515] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd2873aa000 00:13:03.103 [2024-07-15 23:49:18.117841] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:03.103 [2024-07-15 23:49:18.138392] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:03.103 [2024-07-15 23:49:18.138416] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:03.103 [2024-07-15 23:49:18.140461] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:03.103 [2024-07-15 23:49:18.140511] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:03.103 [2024-07-15 23:49:18.140593] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:03.103 [2024-07-15 23:49:18.140607] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:03.103 [2024-07-15 23:49:18.140613] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:03.103 [2024-07-15 23:49:18.141464] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:03.103 [2024-07-15 23:49:18.141474] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:03.103 [2024-07-15 23:49:18.141481] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:03.103 [2024-07-15 23:49:18.142473] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:03.103 [2024-07-15 23:49:18.142488] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:03.103 [2024-07-15 23:49:18.142496] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:03.103 [2024-07-15 23:49:18.143481] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:03.103 [2024-07-15 23:49:18.143490] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:03.103 [2024-07-15 23:49:18.144484] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:03.103 [2024-07-15 23:49:18.144493] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:03.103 [2024-07-15 23:49:18.144498] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:03.103 [2024-07-15 
23:49:18.144505] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:03.103 [2024-07-15 23:49:18.144610] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:03.103 [2024-07-15 23:49:18.144615] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:03.103 [2024-07-15 23:49:18.144620] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:03.103 [2024-07-15 23:49:18.145489] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:03.103 [2024-07-15 23:49:18.146496] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:03.103 [2024-07-15 23:49:18.147507] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:03.103 [2024-07-15 23:49:18.148513] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:03.103 [2024-07-15 23:49:18.148554] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:03.103 [2024-07-15 23:49:18.149522] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:03.103 [2024-07-15 23:49:18.149531] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:03.103 [2024-07-15 23:49:18.149536] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:03.103 [2024-07-15 23:49:18.149557] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:03.103 [2024-07-15 23:49:18.149564] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:03.103 [2024-07-15 23:49:18.149577] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:03.103 [2024-07-15 23:49:18.149582] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:03.103 [2024-07-15 23:49:18.149594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:03.103 [2024-07-15 23:49:18.156239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:03.103 [2024-07-15 23:49:18.156252] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:03.103 [2024-07-15 23:49:18.156259] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:03.103 [2024-07-15 23:49:18.156264] 
nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:03.103 [2024-07-15 23:49:18.156269] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:03.103 [2024-07-15 23:49:18.156273] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:03.103 [2024-07-15 23:49:18.156278] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:03.103 [2024-07-15 23:49:18.156283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:03.103 [2024-07-15 23:49:18.156290] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:03.103 [2024-07-15 23:49:18.156300] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:03.103 [2024-07-15 23:49:18.164238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:03.103 [2024-07-15 23:49:18.164254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.103 [2024-07-15 23:49:18.164263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.103 [2024-07-15 23:49:18.164271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.103 [2024-07-15 23:49:18.164279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.103 [2024-07-15 23:49:18.164284] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:03.103 [2024-07-15 23:49:18.164292] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:03.103 [2024-07-15 23:49:18.164301] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:03.103 [2024-07-15 23:49:18.172236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:03.103 [2024-07-15 23:49:18.172244] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:03.103 [2024-07-15 23:49:18.172250] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:03.103 [2024-07-15 23:49:18.172256] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:03.103 [2024-07-15 23:49:18.172262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 
00:13:03.103 [2024-07-15 23:49:18.172270] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:03.103 [2024-07-15 23:49:18.180236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:03.103 [2024-07-15 23:49:18.180302] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:03.103 [2024-07-15 23:49:18.180310] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:03.103 [2024-07-15 23:49:18.180318] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:03.103 [2024-07-15 23:49:18.180322] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:03.103 [2024-07-15 23:49:18.180329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:03.103 [2024-07-15 23:49:18.188238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:03.103 [2024-07-15 23:49:18.188249] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:03.103 [2024-07-15 23:49:18.188258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:03.103 [2024-07-15 23:49:18.188265] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:03.103 [2024-07-15 23:49:18.188272] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:03.103 [2024-07-15 23:49:18.188276] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:03.103 [2024-07-15 23:49:18.188282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:03.103 [2024-07-15 23:49:18.196236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:03.103 [2024-07-15 23:49:18.196250] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:03.103 [2024-07-15 23:49:18.196258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:03.103 [2024-07-15 23:49:18.196265] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:03.103 [2024-07-15 23:49:18.196270] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:03.104 [2024-07-15 23:49:18.196276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:03.104 [2024-07-15 23:49:18.204238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
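
The register traffic in the debug lines above is the generic NVMe controller enable handshake: read CAP (offset 0x0) and VS (0x8), check CC (0x14) and wait for CSTS.RDY (0x1c) to clear, program the admin queue registers AQA/ASQ/ACQ at 0x24/0x28/0x30 (the 0xff00ff AQA value above encodes 256-entry queues), set CC.EN = 1, then poll CSTS.RDY = 1 before issuing Identify, configuring AER, and setting the number of queues. A rough sketch of that handshake follows; the regs object with read32/write32/write64 methods is a hypothetical stand-in for the nvme_vfio_ctrlr_*_reg_* helpers seen in the log, and the bit positions reflect the standard NVMe register layout rather than anything specific to this run.

    import time

    CC, CSTS = 0x14, 0x1c             # controller configuration / status (offsets match the log)
    AQA, ASQ, ACQ = 0x24, 0x28, 0x30  # admin queue attributes / SQ base / CQ base

    def wait_rdy(regs, want, timeout_s=15.0):
        # Poll CSTS.RDY (bit 0) until it matches the expected value or the timeout expires.
        deadline = time.monotonic() + timeout_s
        while (regs.read32(CSTS) & 0x1) != want:
            if time.monotonic() > deadline:
                raise TimeoutError("CSTS.RDY did not reach %d" % want)
            time.sleep(0.001)

    def enable_controller(regs, asq_addr, acq_addr, qsize=256):
        cc = regs.read32(CC)
        if cc & 0x1:                   # CC.EN already set: disable first
            regs.write32(CC, cc & ~0x1)
        wait_rdy(regs, 0)              # controller must report not-ready before reconfiguration
        regs.write32(AQA, ((qsize - 1) << 16) | (qsize - 1))  # 0-based CQ/SQ sizes, e.g. 0xff00ff
        regs.write64(ASQ, asq_addr)    # admin submission queue base address
        regs.write64(ACQ, acq_addr)    # admin completion queue base address
        regs.write32(CC, regs.read32(CC) | 0x1)               # set CC.EN = 1
        wait_rdy(regs, 1)              # ready: Identify / AER / Set Features can follow
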
00:13:03.104 [2024-07-15 23:49:18.204248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:03.104 [2024-07-15 23:49:18.204255] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:03.104 [2024-07-15 23:49:18.204262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:03.104 [2024-07-15 23:49:18.204268] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:13:03.104 [2024-07-15 23:49:18.204273] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:03.104 [2024-07-15 23:49:18.204278] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:03.104 [2024-07-15 23:49:18.204283] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:03.104 [2024-07-15 23:49:18.204290] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:03.104 [2024-07-15 23:49:18.204295] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:03.104 [2024-07-15 23:49:18.204311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:03.104 [2024-07-15 23:49:18.212237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:03.104 [2024-07-15 23:49:18.212251] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:03.104 [2024-07-15 23:49:18.220237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:03.104 [2024-07-15 23:49:18.220251] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:03.104 [2024-07-15 23:49:18.228236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:03.104 [2024-07-15 23:49:18.228249] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:03.104 [2024-07-15 23:49:18.236237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:03.104 [2024-07-15 23:49:18.236255] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:03.104 [2024-07-15 23:49:18.236260] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:03.104 [2024-07-15 23:49:18.236264] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:03.104 [2024-07-15 23:49:18.236267] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 
0x2000002f7000 00:13:03.104 [2024-07-15 23:49:18.236274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:03.104 [2024-07-15 23:49:18.236282] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:03.104 [2024-07-15 23:49:18.236286] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:03.104 [2024-07-15 23:49:18.236292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:03.104 [2024-07-15 23:49:18.236299] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:03.104 [2024-07-15 23:49:18.236303] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:03.104 [2024-07-15 23:49:18.236309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:03.104 [2024-07-15 23:49:18.236317] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:03.104 [2024-07-15 23:49:18.236321] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:03.104 [2024-07-15 23:49:18.236327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:03.104 [2024-07-15 23:49:18.244239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:03.104 [2024-07-15 23:49:18.244255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:03.104 [2024-07-15 23:49:18.244265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:03.104 [2024-07-15 23:49:18.244274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:03.104 ===================================================== 00:13:03.104 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:03.104 ===================================================== 00:13:03.104 Controller Capabilities/Features 00:13:03.104 ================================ 00:13:03.104 Vendor ID: 4e58 00:13:03.104 Subsystem Vendor ID: 4e58 00:13:03.104 Serial Number: SPDK2 00:13:03.104 Model Number: SPDK bdev Controller 00:13:03.104 Firmware Version: 24.09 00:13:03.104 Recommended Arb Burst: 6 00:13:03.104 IEEE OUI Identifier: 8d 6b 50 00:13:03.104 Multi-path I/O 00:13:03.104 May have multiple subsystem ports: Yes 00:13:03.104 May have multiple controllers: Yes 00:13:03.104 Associated with SR-IOV VF: No 00:13:03.104 Max Data Transfer Size: 131072 00:13:03.104 Max Number of Namespaces: 32 00:13:03.104 Max Number of I/O Queues: 127 00:13:03.104 NVMe Specification Version (VS): 1.3 00:13:03.104 NVMe Specification Version (Identify): 1.3 00:13:03.104 Maximum Queue Entries: 256 00:13:03.104 Contiguous Queues Required: Yes 00:13:03.104 Arbitration Mechanisms Supported 00:13:03.104 Weighted Round Robin: Not Supported 00:13:03.104 Vendor Specific: Not Supported 00:13:03.104 
Reset Timeout: 15000 ms 00:13:03.104 Doorbell Stride: 4 bytes 00:13:03.104 NVM Subsystem Reset: Not Supported 00:13:03.104 Command Sets Supported 00:13:03.104 NVM Command Set: Supported 00:13:03.104 Boot Partition: Not Supported 00:13:03.104 Memory Page Size Minimum: 4096 bytes 00:13:03.104 Memory Page Size Maximum: 4096 bytes 00:13:03.104 Persistent Memory Region: Not Supported 00:13:03.104 Optional Asynchronous Events Supported 00:13:03.104 Namespace Attribute Notices: Supported 00:13:03.104 Firmware Activation Notices: Not Supported 00:13:03.104 ANA Change Notices: Not Supported 00:13:03.104 PLE Aggregate Log Change Notices: Not Supported 00:13:03.104 LBA Status Info Alert Notices: Not Supported 00:13:03.104 EGE Aggregate Log Change Notices: Not Supported 00:13:03.104 Normal NVM Subsystem Shutdown event: Not Supported 00:13:03.104 Zone Descriptor Change Notices: Not Supported 00:13:03.104 Discovery Log Change Notices: Not Supported 00:13:03.104 Controller Attributes 00:13:03.104 128-bit Host Identifier: Supported 00:13:03.104 Non-Operational Permissive Mode: Not Supported 00:13:03.104 NVM Sets: Not Supported 00:13:03.104 Read Recovery Levels: Not Supported 00:13:03.104 Endurance Groups: Not Supported 00:13:03.104 Predictable Latency Mode: Not Supported 00:13:03.104 Traffic Based Keep ALive: Not Supported 00:13:03.104 Namespace Granularity: Not Supported 00:13:03.104 SQ Associations: Not Supported 00:13:03.104 UUID List: Not Supported 00:13:03.104 Multi-Domain Subsystem: Not Supported 00:13:03.104 Fixed Capacity Management: Not Supported 00:13:03.104 Variable Capacity Management: Not Supported 00:13:03.104 Delete Endurance Group: Not Supported 00:13:03.104 Delete NVM Set: Not Supported 00:13:03.104 Extended LBA Formats Supported: Not Supported 00:13:03.104 Flexible Data Placement Supported: Not Supported 00:13:03.104 00:13:03.104 Controller Memory Buffer Support 00:13:03.104 ================================ 00:13:03.104 Supported: No 00:13:03.104 00:13:03.104 Persistent Memory Region Support 00:13:03.104 ================================ 00:13:03.104 Supported: No 00:13:03.104 00:13:03.104 Admin Command Set Attributes 00:13:03.104 ============================ 00:13:03.104 Security Send/Receive: Not Supported 00:13:03.104 Format NVM: Not Supported 00:13:03.104 Firmware Activate/Download: Not Supported 00:13:03.104 Namespace Management: Not Supported 00:13:03.104 Device Self-Test: Not Supported 00:13:03.104 Directives: Not Supported 00:13:03.104 NVMe-MI: Not Supported 00:13:03.104 Virtualization Management: Not Supported 00:13:03.104 Doorbell Buffer Config: Not Supported 00:13:03.104 Get LBA Status Capability: Not Supported 00:13:03.104 Command & Feature Lockdown Capability: Not Supported 00:13:03.104 Abort Command Limit: 4 00:13:03.104 Async Event Request Limit: 4 00:13:03.104 Number of Firmware Slots: N/A 00:13:03.104 Firmware Slot 1 Read-Only: N/A 00:13:03.104 Firmware Activation Without Reset: N/A 00:13:03.104 Multiple Update Detection Support: N/A 00:13:03.104 Firmware Update Granularity: No Information Provided 00:13:03.104 Per-Namespace SMART Log: No 00:13:03.104 Asymmetric Namespace Access Log Page: Not Supported 00:13:03.104 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:03.104 Command Effects Log Page: Supported 00:13:03.104 Get Log Page Extended Data: Supported 00:13:03.104 Telemetry Log Pages: Not Supported 00:13:03.104 Persistent Event Log Pages: Not Supported 00:13:03.104 Supported Log Pages Log Page: May Support 00:13:03.104 Commands Supported & Effects Log Page: Not 
Supported 00:13:03.104 Feature Identifiers & Effects Log Page:May Support 00:13:03.104 NVMe-MI Commands & Effects Log Page: May Support 00:13:03.104 Data Area 4 for Telemetry Log: Not Supported 00:13:03.104 Error Log Page Entries Supported: 128 00:13:03.104 Keep Alive: Supported 00:13:03.104 Keep Alive Granularity: 10000 ms 00:13:03.104 00:13:03.104 NVM Command Set Attributes 00:13:03.104 ========================== 00:13:03.104 Submission Queue Entry Size 00:13:03.104 Max: 64 00:13:03.104 Min: 64 00:13:03.104 Completion Queue Entry Size 00:13:03.104 Max: 16 00:13:03.104 Min: 16 00:13:03.105 Number of Namespaces: 32 00:13:03.105 Compare Command: Supported 00:13:03.105 Write Uncorrectable Command: Not Supported 00:13:03.105 Dataset Management Command: Supported 00:13:03.105 Write Zeroes Command: Supported 00:13:03.105 Set Features Save Field: Not Supported 00:13:03.105 Reservations: Not Supported 00:13:03.105 Timestamp: Not Supported 00:13:03.105 Copy: Supported 00:13:03.105 Volatile Write Cache: Present 00:13:03.105 Atomic Write Unit (Normal): 1 00:13:03.105 Atomic Write Unit (PFail): 1 00:13:03.105 Atomic Compare & Write Unit: 1 00:13:03.105 Fused Compare & Write: Supported 00:13:03.105 Scatter-Gather List 00:13:03.105 SGL Command Set: Supported (Dword aligned) 00:13:03.105 SGL Keyed: Not Supported 00:13:03.105 SGL Bit Bucket Descriptor: Not Supported 00:13:03.105 SGL Metadata Pointer: Not Supported 00:13:03.105 Oversized SGL: Not Supported 00:13:03.105 SGL Metadata Address: Not Supported 00:13:03.105 SGL Offset: Not Supported 00:13:03.105 Transport SGL Data Block: Not Supported 00:13:03.105 Replay Protected Memory Block: Not Supported 00:13:03.105 00:13:03.105 Firmware Slot Information 00:13:03.105 ========================= 00:13:03.105 Active slot: 1 00:13:03.105 Slot 1 Firmware Revision: 24.09 00:13:03.105 00:13:03.105 00:13:03.105 Commands Supported and Effects 00:13:03.105 ============================== 00:13:03.105 Admin Commands 00:13:03.105 -------------- 00:13:03.105 Get Log Page (02h): Supported 00:13:03.105 Identify (06h): Supported 00:13:03.105 Abort (08h): Supported 00:13:03.105 Set Features (09h): Supported 00:13:03.105 Get Features (0Ah): Supported 00:13:03.105 Asynchronous Event Request (0Ch): Supported 00:13:03.105 Keep Alive (18h): Supported 00:13:03.105 I/O Commands 00:13:03.105 ------------ 00:13:03.105 Flush (00h): Supported LBA-Change 00:13:03.105 Write (01h): Supported LBA-Change 00:13:03.105 Read (02h): Supported 00:13:03.105 Compare (05h): Supported 00:13:03.105 Write Zeroes (08h): Supported LBA-Change 00:13:03.105 Dataset Management (09h): Supported LBA-Change 00:13:03.105 Copy (19h): Supported LBA-Change 00:13:03.105 00:13:03.105 Error Log 00:13:03.105 ========= 00:13:03.105 00:13:03.105 Arbitration 00:13:03.105 =========== 00:13:03.105 Arbitration Burst: 1 00:13:03.105 00:13:03.105 Power Management 00:13:03.105 ================ 00:13:03.105 Number of Power States: 1 00:13:03.105 Current Power State: Power State #0 00:13:03.105 Power State #0: 00:13:03.105 Max Power: 0.00 W 00:13:03.105 Non-Operational State: Operational 00:13:03.105 Entry Latency: Not Reported 00:13:03.105 Exit Latency: Not Reported 00:13:03.105 Relative Read Throughput: 0 00:13:03.105 Relative Read Latency: 0 00:13:03.105 Relative Write Throughput: 0 00:13:03.105 Relative Write Latency: 0 00:13:03.105 Idle Power: Not Reported 00:13:03.105 Active Power: Not Reported 00:13:03.105 Non-Operational Permissive Mode: Not Supported 00:13:03.105 00:13:03.105 Health Information 00:13:03.105 
================== 00:13:03.105 Critical Warnings: 00:13:03.105 Available Spare Space: OK 00:13:03.105 Temperature: OK 00:13:03.105 Device Reliability: OK 00:13:03.105 Read Only: No 00:13:03.105 Volatile Memory Backup: OK 00:13:03.105 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:03.105 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:03.105 Available Spare: 0% 00:13:03.105 Available Sp[2024-07-15 23:49:18.244375] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:03.105 [2024-07-15 23:49:18.252236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:03.105 [2024-07-15 23:49:18.252269] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:03.105 [2024-07-15 23:49:18.252279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.105 [2024-07-15 23:49:18.252286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.105 [2024-07-15 23:49:18.252292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.105 [2024-07-15 23:49:18.252299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.105 [2024-07-15 23:49:18.252350] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:03.105 [2024-07-15 23:49:18.252362] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:03.105 [2024-07-15 23:49:18.253356] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:03.105 [2024-07-15 23:49:18.253403] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:03.105 [2024-07-15 23:49:18.253409] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:03.105 [2024-07-15 23:49:18.254356] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:03.105 [2024-07-15 23:49:18.254368] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:03.105 [2024-07-15 23:49:18.254418] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:03.105 [2024-07-15 23:49:18.257236] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:03.366 are Threshold: 0% 00:13:03.366 Life Percentage Used: 0% 00:13:03.366 Data Units Read: 0 00:13:03.366 Data Units Written: 0 00:13:03.366 Host Read Commands: 0 00:13:03.366 Host Write Commands: 0 00:13:03.366 Controller Busy Time: 0 minutes 00:13:03.366 Power Cycles: 0 00:13:03.366 Power On Hours: 0 hours 00:13:03.366 Unsafe Shutdowns: 0 00:13:03.366 Unrecoverable Media Errors: 0 00:13:03.366 Lifetime Error Log Entries: 0 00:13:03.366 Warning Temperature Time: 0 minutes 
00:13:03.366 Critical Temperature Time: 0 minutes 00:13:03.366 00:13:03.366 Number of Queues 00:13:03.366 ================ 00:13:03.366 Number of I/O Submission Queues: 127 00:13:03.366 Number of I/O Completion Queues: 127 00:13:03.366 00:13:03.366 Active Namespaces 00:13:03.366 ================= 00:13:03.366 Namespace ID:1 00:13:03.366 Error Recovery Timeout: Unlimited 00:13:03.366 Command Set Identifier: NVM (00h) 00:13:03.366 Deallocate: Supported 00:13:03.366 Deallocated/Unwritten Error: Not Supported 00:13:03.366 Deallocated Read Value: Unknown 00:13:03.366 Deallocate in Write Zeroes: Not Supported 00:13:03.366 Deallocated Guard Field: 0xFFFF 00:13:03.366 Flush: Supported 00:13:03.366 Reservation: Supported 00:13:03.366 Namespace Sharing Capabilities: Multiple Controllers 00:13:03.366 Size (in LBAs): 131072 (0GiB) 00:13:03.366 Capacity (in LBAs): 131072 (0GiB) 00:13:03.366 Utilization (in LBAs): 131072 (0GiB) 00:13:03.366 NGUID: C75818454DC04ACA8B5AF4E9D06DFD55 00:13:03.366 UUID: c7581845-4dc0-4aca-8b5a-f4e9d06dfd55 00:13:03.366 Thin Provisioning: Not Supported 00:13:03.366 Per-NS Atomic Units: Yes 00:13:03.366 Atomic Boundary Size (Normal): 0 00:13:03.366 Atomic Boundary Size (PFail): 0 00:13:03.366 Atomic Boundary Offset: 0 00:13:03.366 Maximum Single Source Range Length: 65535 00:13:03.366 Maximum Copy Length: 65535 00:13:03.366 Maximum Source Range Count: 1 00:13:03.366 NGUID/EUI64 Never Reused: No 00:13:03.366 Namespace Write Protected: No 00:13:03.366 Number of LBA Formats: 1 00:13:03.366 Current LBA Format: LBA Format #00 00:13:03.366 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:03.366 00:13:03.366 23:49:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:03.366 [2024-07-15 23:49:18.448256] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:08.654 Initializing NVMe Controllers 00:13:08.654 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:08.654 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:08.654 Initialization complete. Launching workers. 
00:13:08.654 ======================================================== 00:13:08.654 Latency(us) 00:13:08.654 Device Information : IOPS MiB/s Average min max 00:13:08.654 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39964.60 156.11 3205.23 832.25 8792.39 00:13:08.654 ======================================================== 00:13:08.654 Total : 39964.60 156.11 3205.23 832.25 8792.39 00:13:08.654 00:13:08.654 [2024-07-15 23:49:23.556422] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:08.654 23:49:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:08.654 [2024-07-15 23:49:23.735988] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:13.938 Initializing NVMe Controllers 00:13:13.938 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:13.938 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:13.938 Initialization complete. Launching workers. 00:13:13.938 ======================================================== 00:13:13.938 Latency(us) 00:13:13.938 Device Information : IOPS MiB/s Average min max 00:13:13.938 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35161.84 137.35 3639.91 1113.45 8985.33 00:13:13.938 ======================================================== 00:13:13.938 Total : 35161.84 137.35 3639.91 1113.45 8985.33 00:13:13.938 00:13:13.938 [2024-07-15 23:49:28.757403] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:13.938 23:49:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:13.938 [2024-07-15 23:49:28.949546] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:19.224 [2024-07-15 23:49:34.085308] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:19.224 Initializing NVMe Controllers 00:13:19.224 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:19.224 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:19.224 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:19.224 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:19.224 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:19.224 Initialization complete. Launching workers. 
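
The two Latency(us) summaries above are internally consistent: MiB/s is just IOPS times the 4 KiB I/O size, and with the fixed queue depth of 128 from the spdk_nvme_perf command line the average latency sits near qd/IOPS (Little's law), with the small remaining gap attributable to ramp-up and teardown. A quick check, using the figures as reported:

    # Values copied from the perf tables above; the 4 KiB I/O size and qd=128 come from the
    # "-o 4096" and "-q 128" arguments on the spdk_nvme_perf command lines in this log.
    IO_SIZE = 4096
    QUEUE_DEPTH = 128

    def check(label, iops, mib_s, avg_us):
        print(label, "MiB/s from IOPS:", round(iops * IO_SIZE / (1 << 20), 2), "reported:", mib_s)
        # Little's law: in-flight commands = IOPS * latency, so latency is roughly QUEUE_DEPTH / IOPS.
        print(label, "latency from qd:", round(QUEUE_DEPTH * 1e6 / iops, 2), "us, reported:", avg_us)

    check("read ", 39964.60, 156.11, 3205.23)   # vfio-user2 read run
    check("write", 35161.84, 137.35, 3639.91)   # vfio-user2 write run
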
00:13:19.224 Starting thread on core 2 00:13:19.224 Starting thread on core 3 00:13:19.224 Starting thread on core 1 00:13:19.224 23:49:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:19.224 [2024-07-15 23:49:34.353662] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:22.625 [2024-07-15 23:49:37.411655] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:22.625 Initializing NVMe Controllers 00:13:22.625 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:22.625 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:22.625 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:22.625 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:22.625 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:22.625 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:22.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:22.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:22.625 Initialization complete. Launching workers. 00:13:22.625 Starting thread on core 1 with urgent priority queue 00:13:22.625 Starting thread on core 2 with urgent priority queue 00:13:22.625 Starting thread on core 3 with urgent priority queue 00:13:22.625 Starting thread on core 0 with urgent priority queue 00:13:22.625 SPDK bdev Controller (SPDK2 ) core 0: 12436.67 IO/s 8.04 secs/100000 ios 00:13:22.625 SPDK bdev Controller (SPDK2 ) core 1: 12427.00 IO/s 8.05 secs/100000 ios 00:13:22.625 SPDK bdev Controller (SPDK2 ) core 2: 8993.33 IO/s 11.12 secs/100000 ios 00:13:22.625 SPDK bdev Controller (SPDK2 ) core 3: 9470.67 IO/s 10.56 secs/100000 ios 00:13:22.625 ======================================================== 00:13:22.625 00:13:22.625 23:49:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:22.625 [2024-07-15 23:49:37.682076] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:22.625 Initializing NVMe Controllers 00:13:22.625 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:22.625 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:22.625 Namespace ID: 1 size: 0GB 00:13:22.625 Initialization complete. 00:13:22.625 INFO: using host memory buffer for IO 00:13:22.625 Hello world! 
00:13:22.625 [2024-07-15 23:49:37.692144] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:22.625 23:49:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:22.884 [2024-07-15 23:49:37.956223] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:24.264 Initializing NVMe Controllers 00:13:24.264 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:24.264 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:24.264 Initialization complete. Launching workers. 00:13:24.264 submit (in ns) avg, min, max = 9051.0, 3885.0, 3999599.2 00:13:24.264 complete (in ns) avg, min, max = 17277.5, 2390.0, 3998925.8 00:13:24.264 00:13:24.264 Submit histogram 00:13:24.264 ================ 00:13:24.264 Range in us Cumulative Count 00:13:24.264 3.867 - 3.893: 0.1772% ( 34) 00:13:24.264 3.893 - 3.920: 1.5893% ( 271) 00:13:24.264 3.920 - 3.947: 6.0393% ( 854) 00:13:24.264 3.947 - 3.973: 14.6735% ( 1657) 00:13:24.264 3.973 - 4.000: 26.4655% ( 2263) 00:13:24.264 4.000 - 4.027: 37.2102% ( 2062) 00:13:24.264 4.027 - 4.053: 50.7790% ( 2604) 00:13:24.264 4.053 - 4.080: 67.7974% ( 3266) 00:13:24.264 4.080 - 4.107: 82.9556% ( 2909) 00:13:24.264 4.107 - 4.133: 91.9806% ( 1732) 00:13:24.264 4.133 - 4.160: 96.4306% ( 854) 00:13:24.264 4.160 - 4.187: 98.4732% ( 392) 00:13:24.264 4.187 - 4.213: 99.1819% ( 136) 00:13:24.264 4.213 - 4.240: 99.4372% ( 49) 00:13:24.264 4.240 - 4.267: 99.4685% ( 6) 00:13:24.264 4.267 - 4.293: 99.4893% ( 4) 00:13:24.264 4.373 - 4.400: 99.4946% ( 1) 00:13:24.264 4.480 - 4.507: 99.4998% ( 1) 00:13:24.264 4.533 - 4.560: 99.5050% ( 1) 00:13:24.264 4.693 - 4.720: 99.5102% ( 1) 00:13:24.264 4.720 - 4.747: 99.5154% ( 1) 00:13:24.264 4.800 - 4.827: 99.5206% ( 1) 00:13:24.264 5.040 - 5.067: 99.5258% ( 1) 00:13:24.264 5.280 - 5.307: 99.5362% ( 2) 00:13:24.264 5.653 - 5.680: 99.5415% ( 1) 00:13:24.264 5.707 - 5.733: 99.5467% ( 1) 00:13:24.264 5.733 - 5.760: 99.5519% ( 1) 00:13:24.264 5.787 - 5.813: 99.5571% ( 1) 00:13:24.264 5.840 - 5.867: 99.5623% ( 1) 00:13:24.264 5.867 - 5.893: 99.5727% ( 2) 00:13:24.264 5.973 - 6.000: 99.5779% ( 1) 00:13:24.264 6.000 - 6.027: 99.5831% ( 1) 00:13:24.264 6.027 - 6.053: 99.5936% ( 2) 00:13:24.264 6.053 - 6.080: 99.6144% ( 4) 00:13:24.264 6.107 - 6.133: 99.6248% ( 2) 00:13:24.264 6.133 - 6.160: 99.6300% ( 1) 00:13:24.264 6.160 - 6.187: 99.6509% ( 4) 00:13:24.264 6.213 - 6.240: 99.6561% ( 1) 00:13:24.264 6.240 - 6.267: 99.6613% ( 1) 00:13:24.264 6.320 - 6.347: 99.6665% ( 1) 00:13:24.264 6.373 - 6.400: 99.6717% ( 1) 00:13:24.264 6.480 - 6.507: 99.6821% ( 2) 00:13:24.264 6.533 - 6.560: 99.6874% ( 1) 00:13:24.264 6.613 - 6.640: 99.6978% ( 2) 00:13:24.264 6.640 - 6.667: 99.7030% ( 1) 00:13:24.264 6.667 - 6.693: 99.7082% ( 1) 00:13:24.264 6.693 - 6.720: 99.7186% ( 2) 00:13:24.264 6.720 - 6.747: 99.7343% ( 3) 00:13:24.264 6.747 - 6.773: 99.7395% ( 1) 00:13:24.264 6.773 - 6.800: 99.7447% ( 1) 00:13:24.264 6.827 - 6.880: 99.7499% ( 1) 00:13:24.264 6.880 - 6.933: 99.7603% ( 2) 00:13:24.264 6.933 - 6.987: 99.7759% ( 3) 00:13:24.264 7.093 - 7.147: 99.7811% ( 1) 00:13:24.264 7.147 - 7.200: 99.7916% ( 2) 00:13:24.264 7.200 - 7.253: 99.7968% ( 1) 00:13:24.264 7.253 - 7.307: 99.8020% ( 1) 00:13:24.264 7.307 - 7.360: 99.8072% ( 1) 00:13:24.264 7.360 - 
7.413: 99.8228% ( 3) 00:13:24.264 7.413 - 7.467: 99.8333% ( 2) 00:13:24.264 7.467 - 7.520: 99.8437% ( 2) 00:13:24.264 7.520 - 7.573: 99.8489% ( 1) 00:13:24.264 7.787 - 7.840: 99.8541% ( 1) 00:13:24.264 8.107 - 8.160: 99.8645% ( 2) 00:13:24.264 8.587 - 8.640: 99.8697% ( 1) 00:13:24.264 14.507 - 14.613: 99.8749% ( 1) 00:13:24.264 3986.773 - 4014.080: 100.0000% ( 24) 00:13:24.264 00:13:24.264 Complete histogram 00:13:24.264 ================== 00:13:24.264 Range in us Cumulative Count 00:13:24.264 2.387 - 2.400: 0.0052% ( 1) 00:13:24.264 2.400 - 2.413: 0.0938% ( 17) 00:13:24.264 2.413 - 2.427: 0.9588% ( 166) 00:13:24.264 2.427 - 2.440: 1.0317% ( 14) 00:13:24.264 2.440 - 2.453: 1.1724% ( 27) 00:13:24.264 2.453 - 2.467: 38.5650% ( 7176) 00:13:24.264 2.467 - 2.480: 60.0281% ( 4119) 00:13:24.264 2.480 - 2.493: 71.7003% ( 2240) 00:13:24.264 2.493 - 2.507: 79.4226% ( 1482) 00:13:24.265 2.507 - 2.520: 81.8248% ( 461) 00:13:24.265 2.520 - [2024-07-15 23:49:39.053896] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:24.265 2.533: 84.1384% ( 444) 00:13:24.265 2.533 - 2.547: 89.2606% ( 983) 00:13:24.265 2.547 - 2.560: 94.3463% ( 976) 00:13:24.265 2.560 - 2.573: 97.0090% ( 511) 00:13:24.265 2.573 - 2.587: 98.4732% ( 281) 00:13:24.265 2.587 - 2.600: 99.1454% ( 129) 00:13:24.265 2.600 - 2.613: 99.3382% ( 37) 00:13:24.265 2.613 - 2.627: 99.3747% ( 7) 00:13:24.265 2.627 - 2.640: 99.3799% ( 1) 00:13:24.265 4.347 - 4.373: 99.3851% ( 1) 00:13:24.265 4.507 - 4.533: 99.3903% ( 1) 00:13:24.265 4.533 - 4.560: 99.3955% ( 1) 00:13:24.265 4.560 - 4.587: 99.4008% ( 1) 00:13:24.265 4.693 - 4.720: 99.4060% ( 1) 00:13:24.265 4.747 - 4.773: 99.4112% ( 1) 00:13:24.265 4.773 - 4.800: 99.4164% ( 1) 00:13:24.265 4.800 - 4.827: 99.4268% ( 2) 00:13:24.265 4.827 - 4.853: 99.4320% ( 1) 00:13:24.265 4.907 - 4.933: 99.4372% ( 1) 00:13:24.265 4.933 - 4.960: 99.4424% ( 1) 00:13:24.265 4.960 - 4.987: 99.4477% ( 1) 00:13:24.265 5.040 - 5.067: 99.4529% ( 1) 00:13:24.265 5.093 - 5.120: 99.4633% ( 2) 00:13:24.265 5.120 - 5.147: 99.4685% ( 1) 00:13:24.265 5.147 - 5.173: 99.4737% ( 1) 00:13:24.265 5.173 - 5.200: 99.4893% ( 3) 00:13:24.265 5.280 - 5.307: 99.4946% ( 1) 00:13:24.265 5.307 - 5.333: 99.5050% ( 2) 00:13:24.265 5.360 - 5.387: 99.5154% ( 2) 00:13:24.265 5.387 - 5.413: 99.5258% ( 2) 00:13:24.265 5.440 - 5.467: 99.5362% ( 2) 00:13:24.265 5.493 - 5.520: 99.5415% ( 1) 00:13:24.265 5.520 - 5.547: 99.5519% ( 2) 00:13:24.265 5.573 - 5.600: 99.5571% ( 1) 00:13:24.265 5.600 - 5.627: 99.5623% ( 1) 00:13:24.265 5.627 - 5.653: 99.5675% ( 1) 00:13:24.265 6.160 - 6.187: 99.5779% ( 2) 00:13:24.265 6.187 - 6.213: 99.5831% ( 1) 00:13:24.265 6.213 - 6.240: 99.5883% ( 1) 00:13:24.265 6.587 - 6.613: 99.5936% ( 1) 00:13:24.265 6.640 - 6.667: 99.5988% ( 1) 00:13:24.265 6.933 - 6.987: 99.6040% ( 1) 00:13:24.265 12.853 - 12.907: 99.6092% ( 1) 00:13:24.265 13.653 - 13.760: 99.6144% ( 1) 00:13:24.265 15.253 - 15.360: 99.6196% ( 1) 00:13:24.265 15.680 - 15.787: 99.6248% ( 1) 00:13:24.265 42.667 - 42.880: 99.6300% ( 1) 00:13:24.265 3986.773 - 4014.080: 100.0000% ( 71) 00:13:24.265 00:13:24.265 23:49:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:24.265 23:49:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:24.265 23:49:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 
00:13:24.265 23:49:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:24.265 23:49:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:24.265 [ 00:13:24.265 { 00:13:24.265 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:24.265 "subtype": "Discovery", 00:13:24.265 "listen_addresses": [], 00:13:24.265 "allow_any_host": true, 00:13:24.265 "hosts": [] 00:13:24.265 }, 00:13:24.265 { 00:13:24.265 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:24.265 "subtype": "NVMe", 00:13:24.265 "listen_addresses": [ 00:13:24.265 { 00:13:24.265 "trtype": "VFIOUSER", 00:13:24.265 "adrfam": "IPv4", 00:13:24.265 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:24.265 "trsvcid": "0" 00:13:24.265 } 00:13:24.265 ], 00:13:24.265 "allow_any_host": true, 00:13:24.265 "hosts": [], 00:13:24.265 "serial_number": "SPDK1", 00:13:24.265 "model_number": "SPDK bdev Controller", 00:13:24.265 "max_namespaces": 32, 00:13:24.265 "min_cntlid": 1, 00:13:24.265 "max_cntlid": 65519, 00:13:24.265 "namespaces": [ 00:13:24.265 { 00:13:24.265 "nsid": 1, 00:13:24.265 "bdev_name": "Malloc1", 00:13:24.265 "name": "Malloc1", 00:13:24.265 "nguid": "823FBF1FE7324283B29469205B01C824", 00:13:24.265 "uuid": "823fbf1f-e732-4283-b294-69205b01c824" 00:13:24.265 }, 00:13:24.265 { 00:13:24.265 "nsid": 2, 00:13:24.265 "bdev_name": "Malloc3", 00:13:24.265 "name": "Malloc3", 00:13:24.265 "nguid": "04DA85A4273342D5BC3192F9C46CD9D6", 00:13:24.265 "uuid": "04da85a4-2733-42d5-bc31-92f9c46cd9d6" 00:13:24.265 } 00:13:24.265 ] 00:13:24.265 }, 00:13:24.265 { 00:13:24.265 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:24.265 "subtype": "NVMe", 00:13:24.265 "listen_addresses": [ 00:13:24.265 { 00:13:24.265 "trtype": "VFIOUSER", 00:13:24.265 "adrfam": "IPv4", 00:13:24.265 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:24.265 "trsvcid": "0" 00:13:24.265 } 00:13:24.265 ], 00:13:24.265 "allow_any_host": true, 00:13:24.265 "hosts": [], 00:13:24.265 "serial_number": "SPDK2", 00:13:24.265 "model_number": "SPDK bdev Controller", 00:13:24.265 "max_namespaces": 32, 00:13:24.265 "min_cntlid": 1, 00:13:24.265 "max_cntlid": 65519, 00:13:24.265 "namespaces": [ 00:13:24.265 { 00:13:24.265 "nsid": 1, 00:13:24.265 "bdev_name": "Malloc2", 00:13:24.265 "name": "Malloc2", 00:13:24.265 "nguid": "C75818454DC04ACA8B5AF4E9D06DFD55", 00:13:24.265 "uuid": "c7581845-4dc0-4aca-8b5a-f4e9d06dfd55" 00:13:24.265 } 00:13:24.265 ] 00:13:24.265 } 00:13:24.265 ] 00:13:24.265 23:49:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:24.265 23:49:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=371504 00:13:24.265 23:49:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:24.265 23:49:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1259 -- # local i=0 00:13:24.265 23:49:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:24.265 23:49:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1260 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:24.265 23:49:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:24.265 23:49:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # return 0 00:13:24.265 23:49:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:24.265 23:49:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:24.265 Malloc4 00:13:24.265 23:49:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:24.265 [2024-07-15 23:49:39.446895] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:24.525 [2024-07-15 23:49:39.570702] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:24.525 23:49:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:24.525 Asynchronous Event Request test 00:13:24.525 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:24.525 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:24.525 Registering asynchronous event callbacks... 00:13:24.525 Starting namespace attribute notice tests for all controllers... 00:13:24.525 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:24.525 aer_cb - Changed Namespace 00:13:24.525 Cleaning up... 00:13:24.786 [ 00:13:24.786 { 00:13:24.786 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:24.786 "subtype": "Discovery", 00:13:24.786 "listen_addresses": [], 00:13:24.786 "allow_any_host": true, 00:13:24.786 "hosts": [] 00:13:24.786 }, 00:13:24.786 { 00:13:24.786 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:24.786 "subtype": "NVMe", 00:13:24.786 "listen_addresses": [ 00:13:24.786 { 00:13:24.786 "trtype": "VFIOUSER", 00:13:24.786 "adrfam": "IPv4", 00:13:24.786 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:24.786 "trsvcid": "0" 00:13:24.786 } 00:13:24.786 ], 00:13:24.786 "allow_any_host": true, 00:13:24.786 "hosts": [], 00:13:24.786 "serial_number": "SPDK1", 00:13:24.786 "model_number": "SPDK bdev Controller", 00:13:24.786 "max_namespaces": 32, 00:13:24.786 "min_cntlid": 1, 00:13:24.786 "max_cntlid": 65519, 00:13:24.786 "namespaces": [ 00:13:24.786 { 00:13:24.786 "nsid": 1, 00:13:24.786 "bdev_name": "Malloc1", 00:13:24.786 "name": "Malloc1", 00:13:24.786 "nguid": "823FBF1FE7324283B29469205B01C824", 00:13:24.786 "uuid": "823fbf1f-e732-4283-b294-69205b01c824" 00:13:24.786 }, 00:13:24.786 { 00:13:24.786 "nsid": 2, 00:13:24.786 "bdev_name": "Malloc3", 00:13:24.786 "name": "Malloc3", 00:13:24.786 "nguid": "04DA85A4273342D5BC3192F9C46CD9D6", 00:13:24.786 "uuid": "04da85a4-2733-42d5-bc31-92f9c46cd9d6" 00:13:24.786 } 00:13:24.786 ] 00:13:24.786 }, 00:13:24.786 { 00:13:24.786 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:24.786 "subtype": "NVMe", 00:13:24.786 "listen_addresses": [ 00:13:24.786 { 00:13:24.786 "trtype": "VFIOUSER", 00:13:24.786 "adrfam": "IPv4", 00:13:24.786 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:24.786 "trsvcid": "0" 00:13:24.786 } 00:13:24.786 ], 00:13:24.786 "allow_any_host": true, 00:13:24.786 "hosts": [], 00:13:24.786 "serial_number": "SPDK2", 00:13:24.786 "model_number": "SPDK bdev Controller", 00:13:24.786 "max_namespaces": 32, 00:13:24.786 "min_cntlid": 1, 00:13:24.786 
"max_cntlid": 65519, 00:13:24.786 "namespaces": [ 00:13:24.786 { 00:13:24.786 "nsid": 1, 00:13:24.786 "bdev_name": "Malloc2", 00:13:24.786 "name": "Malloc2", 00:13:24.786 "nguid": "C75818454DC04ACA8B5AF4E9D06DFD55", 00:13:24.786 "uuid": "c7581845-4dc0-4aca-8b5a-f4e9d06dfd55" 00:13:24.786 }, 00:13:24.786 { 00:13:24.786 "nsid": 2, 00:13:24.786 "bdev_name": "Malloc4", 00:13:24.786 "name": "Malloc4", 00:13:24.786 "nguid": "79DF8301535C4B039DC6CA702B281C07", 00:13:24.786 "uuid": "79df8301-535c-4b03-9dc6-ca702b281c07" 00:13:24.786 } 00:13:24.786 ] 00:13:24.786 } 00:13:24.786 ] 00:13:24.786 23:49:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 371504 00:13:24.786 23:49:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:24.786 23:49:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 362382 00:13:24.786 23:49:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@942 -- # '[' -z 362382 ']' 00:13:24.786 23:49:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # kill -0 362382 00:13:24.786 23:49:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # uname 00:13:24.786 23:49:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:13:24.786 23:49:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 362382 00:13:24.786 23:49:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:13:24.786 23:49:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:13:24.786 23:49:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@960 -- # echo 'killing process with pid 362382' 00:13:24.786 killing process with pid 362382 00:13:24.786 23:49:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@961 -- # kill 362382 00:13:24.786 23:49:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # wait 362382 00:13:25.047 23:49:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:25.047 23:49:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:25.047 23:49:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:25.047 23:49:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:25.047 23:49:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:25.047 23:49:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=371622 00:13:25.047 23:49:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 371622' 00:13:25.047 Process pid: 371622 00:13:25.047 23:49:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:25.047 23:49:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:25.047 23:49:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 371622 00:13:25.047 23:49:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@823 -- # '[' -z 371622 ']' 00:13:25.047 23:49:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.047 23:49:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@828 -- # local max_retries=100 
00:13:25.047 23:49:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.047 23:49:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:25.047 23:49:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:25.047 [2024-07-15 23:49:40.049602] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:25.047 [2024-07-15 23:49:40.050325] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:13:25.047 [2024-07-15 23:49:40.050364] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.047 [2024-07-15 23:49:40.110644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:25.047 [2024-07-15 23:49:40.176682] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.047 [2024-07-15 23:49:40.176721] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.047 [2024-07-15 23:49:40.176730] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.047 [2024-07-15 23:49:40.176740] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.047 [2024-07-15 23:49:40.176746] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:25.047 [2024-07-15 23:49:40.176883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.047 [2024-07-15 23:49:40.177022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.047 [2024-07-15 23:49:40.177202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.047 [2024-07-15 23:49:40.177204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:25.307 [2024-07-15 23:49:40.241487] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:25.307 [2024-07-15 23:49:40.241491] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:25.307 [2024-07-15 23:49:40.242526] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:25.307 [2024-07-15 23:49:40.242853] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:25.307 [2024-07-15 23:49:40.242944] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
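Condensed reference sketch of the vfio-user target setup that the rpc.py commands below perform (a sketch only: $RPC is an assumed shorthand for the full scripts/rpc.py path shown in the log; every argument is taken from the logged commands, and the same per-device sequence is repeated for Malloc2 / cnode2 / vfio-user2/2):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py      # assumed shorthand for the path used in the log
    $RPC nvmf_create_transport -t VFIOUSER -M -I                              # interrupt-mode vfio-user transport
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1                           # socket directory for device 1
    $RPC bdev_malloc_create 64 512 -b Malloc1                                 # backing malloc bdev
    $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1         # NVMe-oF subsystem
    $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1             # attach namespace
    $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0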
00:13:25.878 23:49:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:25.878 23:49:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # return 0 00:13:25.878 23:49:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:26.820 23:49:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:26.820 23:49:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:26.820 23:49:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:26.820 23:49:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:26.820 23:49:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:26.820 23:49:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:27.080 Malloc1 00:13:27.080 23:49:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:27.340 23:49:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:27.340 23:49:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:27.600 23:49:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:27.600 23:49:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:27.600 23:49:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:27.860 Malloc2 00:13:27.860 23:49:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:27.860 23:49:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:28.121 23:49:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:28.381 23:49:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:28.381 23:49:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 371622 00:13:28.381 23:49:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@942 -- # '[' -z 371622 ']' 00:13:28.381 23:49:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # kill -0 371622 00:13:28.381 23:49:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # uname 00:13:28.381 23:49:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:13:28.381 23:49:43 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 371622 00:13:28.381 23:49:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:13:28.381 23:49:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:13:28.381 23:49:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@960 -- # echo 'killing process with pid 371622' 00:13:28.381 killing process with pid 371622 00:13:28.381 23:49:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@961 -- # kill 371622 00:13:28.381 23:49:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # wait 371622 00:13:28.381 23:49:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:28.381 23:49:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:28.381 00:13:28.381 real 0m51.153s 00:13:28.381 user 3m22.779s 00:13:28.381 sys 0m3.039s 00:13:28.381 23:49:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:28.381 23:49:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:28.381 ************************************ 00:13:28.381 END TEST nvmf_vfio_user 00:13:28.381 ************************************ 00:13:28.642 23:49:43 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:13:28.642 23:49:43 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:28.642 23:49:43 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:13:28.642 23:49:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:13:28.642 23:49:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:28.642 ************************************ 00:13:28.642 START TEST nvmf_vfio_user_nvme_compliance 00:13:28.642 ************************************ 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:28.642 * Looking for test storage... 
00:13:28.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=372544 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 372544' 00:13:28.642 Process pid: 372544 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 372544 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@823 -- # '[' -z 372544 ']' 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:28.642 23:49:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:28.642 [2024-07-15 23:49:43.791529] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:13:28.642 [2024-07-15 23:49:43.791592] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.903 [2024-07-15 23:49:43.861710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:28.903 [2024-07-15 23:49:43.933659] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.903 [2024-07-15 23:49:43.933701] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.903 [2024-07-15 23:49:43.933708] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.903 [2024-07-15 23:49:43.933714] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.903 [2024-07-15 23:49:43.933720] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:28.903 [2024-07-15 23:49:43.933857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.903 [2024-07-15 23:49:43.933985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.903 [2024-07-15 23:49:43.933987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.474 23:49:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:29.474 23:49:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # return 0 00:13:29.474 23:49:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:30.417 23:49:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:30.417 23:49:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:30.417 23:49:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:30.417 23:49:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:30.417 23:49:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:30.417 23:49:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:30.417 23:49:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:30.417 23:49:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:30.417 23:49:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:30.417 23:49:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:30.678 malloc0 00:13:30.678 23:49:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:30.678 23:49:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:30.678 23:49:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:30.678 23:49:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:30.678 23:49:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:30.678 23:49:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:30.678 23:49:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:30.678 23:49:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:30.678 23:49:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:30.678 23:49:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:30.678 23:49:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:30.678 23:49:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:30.678 23:49:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:30.678 
23:49:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:30.678 00:13:30.678 00:13:30.678 CUnit - A unit testing framework for C - Version 2.1-3 00:13:30.678 http://cunit.sourceforge.net/ 00:13:30.678 00:13:30.678 00:13:30.678 Suite: nvme_compliance 00:13:30.678 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 23:49:45.838678] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:30.678 [2024-07-15 23:49:45.840003] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:30.678 [2024-07-15 23:49:45.840013] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:30.678 [2024-07-15 23:49:45.840017] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:30.678 [2024-07-15 23:49:45.841692] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:30.940 passed 00:13:30.940 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 23:49:45.935246] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:30.940 [2024-07-15 23:49:45.938262] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:30.940 passed 00:13:30.940 Test: admin_identify_ns ...[2024-07-15 23:49:46.033464] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:30.940 [2024-07-15 23:49:46.097242] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:30.940 [2024-07-15 23:49:46.105239] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:30.940 [2024-07-15 23:49:46.126351] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:31.201 passed 00:13:31.201 Test: admin_get_features_mandatory_features ...[2024-07-15 23:49:46.217457] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:31.201 [2024-07-15 23:49:46.220481] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:31.201 passed 00:13:31.201 Test: admin_get_features_optional_features ...[2024-07-15 23:49:46.314026] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:31.201 [2024-07-15 23:49:46.317043] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:31.201 passed 00:13:31.461 Test: admin_set_features_number_of_queues ...[2024-07-15 23:49:46.410161] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:31.461 [2024-07-15 23:49:46.516329] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:31.461 passed 00:13:31.461 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 23:49:46.607952] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:31.461 [2024-07-15 23:49:46.610969] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:31.461 passed 00:13:31.722 Test: admin_get_log_page_with_lpo ...[2024-07-15 23:49:46.702473] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:31.722 [2024-07-15 23:49:46.770243] ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len 
(512) 00:13:31.722 [2024-07-15 23:49:46.783298] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:31.722 passed 00:13:31.722 Test: fabric_property_get ...[2024-07-15 23:49:46.877331] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:31.722 [2024-07-15 23:49:46.878574] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:31.722 [2024-07-15 23:49:46.880358] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:31.983 passed 00:13:31.983 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 23:49:46.973901] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:31.983 [2024-07-15 23:49:46.975156] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:31.983 [2024-07-15 23:49:46.977931] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:31.983 passed 00:13:31.983 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 23:49:47.070089] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:31.983 [2024-07-15 23:49:47.154238] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:31.983 [2024-07-15 23:49:47.170245] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:32.244 [2024-07-15 23:49:47.175332] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.244 passed 00:13:32.244 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 23:49:47.266937] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:32.244 [2024-07-15 23:49:47.268179] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:32.244 [2024-07-15 23:49:47.269953] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.244 passed 00:13:32.244 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 23:49:47.361038] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:32.505 [2024-07-15 23:49:47.440242] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:32.505 [2024-07-15 23:49:47.464237] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:32.505 [2024-07-15 23:49:47.469314] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.505 passed 00:13:32.505 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 23:49:47.560909] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:32.505 [2024-07-15 23:49:47.562159] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:32.505 [2024-07-15 23:49:47.562178] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:32.505 [2024-07-15 23:49:47.563934] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.505 passed 00:13:32.505 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 23:49:47.658006] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:32.766 [2024-07-15 23:49:47.749238] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:32.766 [2024-07-15 23:49:47.757234] 
vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:32.766 [2024-07-15 23:49:47.765239] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:32.766 [2024-07-15 23:49:47.773233] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:32.766 [2024-07-15 23:49:47.802324] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:32.766 passed 00:13:32.766 Test: admin_create_io_sq_verify_pc ...[2024-07-15 23:49:47.893902] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:32.766 [2024-07-15 23:49:47.910244] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:32.766 [2024-07-15 23:49:47.928058] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:33.027 passed 00:13:33.027 Test: admin_create_io_qp_max_qps ...[2024-07-15 23:49:48.021577] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:33.971 [2024-07-15 23:49:49.138242] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:34.541 [2024-07-15 23:49:49.515621] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.541 passed 00:13:34.542 Test: admin_create_io_sq_shared_cq ...[2024-07-15 23:49:49.607467] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.802 [2024-07-15 23:49:49.739246] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:34.802 [2024-07-15 23:49:49.776294] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.802 passed 00:13:34.802 00:13:34.802 Run Summary: Type Total Ran Passed Failed Inactive 00:13:34.802 suites 1 1 n/a 0 0 00:13:34.802 tests 18 18 18 0 0 00:13:34.802 asserts 360 360 360 0 n/a 00:13:34.802 00:13:34.802 Elapsed time = 1.654 seconds 00:13:34.802 23:49:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 372544 00:13:34.802 23:49:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@942 -- # '[' -z 372544 ']' 00:13:34.802 23:49:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # kill -0 372544 00:13:34.802 23:49:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@947 -- # uname 00:13:34.802 23:49:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:13:34.802 23:49:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 372544 00:13:34.802 23:49:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:13:34.802 23:49:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:13:34.802 23:49:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # echo 'killing process with pid 372544' 00:13:34.802 killing process with pid 372544 00:13:34.802 23:49:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@961 -- # kill 372544 00:13:34.802 23:49:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # wait 372544 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 
00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:35.064 00:13:35.064 real 0m6.419s 00:13:35.064 user 0m18.381s 00:13:35.064 sys 0m0.462s 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:35.064 ************************************ 00:13:35.064 END TEST nvmf_vfio_user_nvme_compliance 00:13:35.064 ************************************ 00:13:35.064 23:49:50 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:13:35.064 23:49:50 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:35.064 23:49:50 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:13:35.064 23:49:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:13:35.064 23:49:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:35.064 ************************************ 00:13:35.064 START TEST nvmf_vfio_user_fuzz 00:13:35.064 ************************************ 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:35.064 * Looking for test storage... 00:13:35.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=373753 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 373753' 00:13:35.064 Process pid: 373753 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 373753 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@823 -- # '[' -z 373753 ']' 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
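For reference, the VFIO-user fuzz target bring-up that the rpc_cmd calls traced below perform can be summarized as the following standalone sequence (a minimal sketch, assuming the standard SPDK scripts/rpc.py wrapper talking to the default /var/tmp/spdk.sock; the transport, bdev, subsystem and listener parameters are exactly the ones shown in this log):

#!/usr/bin/env bash
# Minimal sketch of the VFIO-user target setup exercised by vfio_user_fuzz.sh
RPC=./scripts/rpc.py                        # assumed path to the SPDK RPC helper
$RPC nvmf_create_transport -t VFIOUSER      # register the vfio-user transport
mkdir -p /var/run/vfio-user                 # directory used as the traddr for the listener
$RPC bdev_malloc_create 64 512 -b malloc0   # 64 MiB RAM-backed bdev with 512 B blocks
$RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
$RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The fuzzer is then pointed at that listener with the trid 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user', as the subsequent entries show.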
00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:35.064 23:49:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:36.008 23:49:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:36.008 23:49:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # return 0 00:13:36.008 23:49:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:36.949 23:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:36.949 23:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:36.949 23:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:36.949 23:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:36.949 23:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:36.949 23:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:36.949 23:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:36.949 23:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:36.949 malloc0 00:13:36.949 23:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:36.949 23:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:36.949 23:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:36.949 23:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:36.949 23:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:36.949 23:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:36.949 23:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:36.949 23:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:36.949 23:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:36.949 23:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:36.949 23:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:36.949 23:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:37.210 23:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:37.210 23:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:37.210 23:49:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:09.332 Fuzzing completed. 
Shutting down the fuzz application 00:14:09.332 00:14:09.333 Dumping successful admin opcodes: 00:14:09.333 8, 9, 10, 24, 00:14:09.333 Dumping successful io opcodes: 00:14:09.333 0, 00:14:09.333 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1128871, total successful commands: 4444, random_seed: 1399846400 00:14:09.333 NS: 0x200003a1ef00 admin qp, Total commands completed: 142056, total successful commands: 1151, random_seed: 2317683456 00:14:09.333 23:50:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:09.333 23:50:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:09.333 23:50:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:09.333 23:50:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:09.333 23:50:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 373753 00:14:09.333 23:50:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@942 -- # '[' -z 373753 ']' 00:14:09.333 23:50:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # kill -0 373753 00:14:09.333 23:50:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@947 -- # uname 00:14:09.333 23:50:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:14:09.333 23:50:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 373753 00:14:09.333 23:50:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:14:09.333 23:50:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:14:09.333 23:50:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # echo 'killing process with pid 373753' 00:14:09.333 killing process with pid 373753 00:14:09.333 23:50:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@961 -- # kill 373753 00:14:09.333 23:50:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # wait 373753 00:14:09.333 23:50:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:09.333 23:50:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:09.333 00:14:09.333 real 0m33.679s 00:14:09.333 user 0m37.979s 00:14:09.333 sys 0m25.778s 00:14:09.333 23:50:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1118 -- # xtrace_disable 00:14:09.333 23:50:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:09.333 ************************************ 00:14:09.333 END TEST nvmf_vfio_user_fuzz 00:14:09.333 ************************************ 00:14:09.333 23:50:23 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:14:09.333 23:50:23 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:09.333 23:50:23 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:14:09.333 23:50:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:14:09.333 23:50:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:09.333 ************************************ 00:14:09.333 
START TEST nvmf_host_management 00:14:09.333 ************************************ 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:09.333 * Looking for test storage... 00:14:09.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.333 23:50:23 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:09.333 23:50:23 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:14:09.333 23:50:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:17.476 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:17.476 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:17.476 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:17.476 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:17.476 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:17.476 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:17.476 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:17.476 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:17.476 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:17.476 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:17.476 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:17.476 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:17.476 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:17.476 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:17.476 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:17.476 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:17.476 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:17.476 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:17.476 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:17.476 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:17.477 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:17.477 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:17.477 Found net devices under 0000:31:00.0: cvl_0_0 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:17.477 Found net devices under 0000:31:00.1: cvl_0_1 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:17.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:17.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:14:17.477 00:14:17.477 --- 10.0.0.2 ping statistics --- 00:14:17.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.477 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:17.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:17.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:14:17.477 00:14:17.477 --- 10.0.0.1 ping statistics --- 00:14:17.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.477 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=384665 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 384665 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@823 -- # '[' -z 384665 ']' 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # local max_retries=100 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:17.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # xtrace_disable 00:14:17.477 23:50:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:17.477 [2024-07-15 23:50:31.965106] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:14:17.477 [2024-07-15 23:50:31.965155] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.477 [2024-07-15 23:50:32.051753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:17.477 [2024-07-15 23:50:32.107725] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.477 [2024-07-15 23:50:32.107761] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.477 [2024-07-15 23:50:32.107767] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.477 [2024-07-15 23:50:32.107771] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.477 [2024-07-15 23:50:32.107775] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:17.477 [2024-07-15 23:50:32.107882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.477 [2024-07-15 23:50:32.108039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:17.477 [2024-07-15 23:50:32.108193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.478 [2024-07-15 23:50:32.108195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # return 0 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:17.739 [2024-07-15 23:50:32.785498] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:17.739 Malloc0 00:14:17.739 [2024-07-15 23:50:32.843804] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=384760 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 384760 /var/tmp/bdevperf.sock 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@823 -- # '[' -z 384760 ']' 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # local max_retries=100 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:17.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:17.739 23:50:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # xtrace_disable 00:14:17.740 23:50:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:17.740 23:50:32 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:17.740 23:50:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:17.740 23:50:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:17.740 23:50:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:17.740 23:50:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:17.740 { 00:14:17.740 "params": { 00:14:17.740 "name": "Nvme$subsystem", 00:14:17.740 "trtype": "$TEST_TRANSPORT", 00:14:17.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:17.740 "adrfam": "ipv4", 00:14:17.740 "trsvcid": "$NVMF_PORT", 00:14:17.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:17.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:17.740 "hdgst": ${hdgst:-false}, 00:14:17.740 "ddgst": ${ddgst:-false} 00:14:17.740 }, 00:14:17.740 "method": "bdev_nvme_attach_controller" 00:14:17.740 } 00:14:17.740 EOF 00:14:17.740 )") 00:14:17.740 23:50:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:17.740 23:50:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:17.740 23:50:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:17.740 23:50:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:17.740 "params": { 00:14:17.740 "name": "Nvme0", 00:14:17.740 "trtype": "tcp", 00:14:17.740 "traddr": "10.0.0.2", 00:14:17.740 "adrfam": "ipv4", 00:14:17.740 "trsvcid": "4420", 00:14:17.740 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:17.740 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:17.740 "hdgst": false, 00:14:17.740 "ddgst": false 00:14:17.740 }, 00:14:17.740 "method": "bdev_nvme_attach_controller" 00:14:17.740 }' 00:14:18.001 [2024-07-15 23:50:32.944034] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:14:18.001 [2024-07-15 23:50:32.944083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid384760 ] 00:14:18.001 [2024-07-15 23:50:33.010238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.001 [2024-07-15 23:50:33.075108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.262 Running I/O for 10 seconds... 
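The bdevperf invocation above receives its attach-controller parameters through a process-substituted JSON config (--json /dev/fd/63). Written out to an ordinary file, the same run looks roughly like the sketch below; the outer "subsystems"/"bdev" wrapper is the usual SPDK JSON-config layout and is assumed here, the file path /tmp/nvme0.json is illustrative, and the controller parameters are exactly those printed by gen_nvmf_target_json in the log:

#!/usr/bin/env bash
# Sketch: the same bdevperf run, with the generated config placed in a real file.
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# queue depth 64, 64 KiB I/O, verify workload, 10 seconds, private RPC socket
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10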
00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # return 0 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=583 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 583 -ge 100 ']' 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:18.835 23:50:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:18.835 [2024-07-15 23:50:33.782668] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d5e20 is same with the state(5) to be set 00:14:18.835 [2024-07-15 23:50:33.782739] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d5e20 is same with the state(5) to be set 00:14:18.835 [2024-07-15 23:50:33.782747] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d5e20 is same with the state(5) to be 
set 00:14:18.835 [2024-07-15 23:50:33.782753] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d5e20 is same with the state(5) to be set 00:14:18.835 [2024-07-15 23:50:33.782760] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d5e20 is same with the state(5) to be set 00:14:18.835 [2024-07-15 23:50:33.782766] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d5e20 is same with the state(5) to be set 00:14:18.835 [2024-07-15 23:50:33.782773] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d5e20 is same with the state(5) to be set 00:14:18.835 [2024-07-15 23:50:33.782779] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d5e20 is same with the state(5) to be set 00:14:18.835 [2024-07-15 23:50:33.782785] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d5e20 is same with the state(5) to be set 00:14:18.835 [2024-07-15 23:50:33.782791] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d5e20 is same with the state(5) to be set 00:14:18.835 [2024-07-15 23:50:33.782797] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d5e20 is same with the state(5) to be set 00:14:18.835 [2024-07-15 23:50:33.782809] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d5e20 is same with the state(5) to be set 00:14:18.835 [2024-07-15 23:50:33.782815] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d5e20 is same with the state(5) to be set 00:14:18.835 [2024-07-15 23:50:33.782821] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d5e20 is same with the state(5) to be set 00:14:18.835 [2024-07-15 23:50:33.786634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.835 [2024-07-15 23:50:33.786676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.835 [2024-07-15 23:50:33.786687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.835 [2024-07-15 23:50:33.786695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.835 [2024-07-15 23:50:33.786703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.835 [2024-07-15 23:50:33.786711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.835 [2024-07-15 23:50:33.786718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.835 [2024-07-15 23:50:33.786726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.835 [2024-07-15 23:50:33.786733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5540 is same with the state(5) to be set 00:14:18.835 [2024-07-15 23:50:33.787009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:18.835 [2024-07-15 23:50:33.787028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.835 [2024-07-15 23:50:33.787042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.835 [2024-07-15 23:50:33.787050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.835 [2024-07-15 23:50:33.787059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.835 [2024-07-15 23:50:33.787066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.835 [2024-07-15 23:50:33.787076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.835 [2024-07-15 23:50:33.787083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.835 [2024-07-15 23:50:33.787093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.835 [2024-07-15 23:50:33.787101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.835 [2024-07-15 23:50:33.787110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.835 [2024-07-15 23:50:33.787117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.835 [2024-07-15 23:50:33.787126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.835 [2024-07-15 23:50:33.787138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.835 [2024-07-15 23:50:33.787148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.835 [2024-07-15 23:50:33.787155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.835 [2024-07-15 23:50:33.787165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.835 [2024-07-15 23:50:33.787172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.835 [2024-07-15 23:50:33.787181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.835 [2024-07-15 23:50:33.787188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.835 [2024-07-15 23:50:33.787197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.835 
[2024-07-15 23:50:33.787204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.835 [2024-07-15 23:50:33.787214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.835 [2024-07-15 23:50:33.787222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.835 [2024-07-15 23:50:33.787238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 
23:50:33.787383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787549] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 23:50:33 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:18.836 [2024-07-15 23:50:33.787722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.787980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.787989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.788000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.788009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.788017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.788026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.788034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.788044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.788050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.788059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.788067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.788076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.788084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.788092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.788099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 23:50:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:18.836 [2024-07-15 23:50:33.788108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.836 [2024-07-15 23:50:33.788118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.836 [2024-07-15 23:50:33.788175] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xef6850 was disconnected and freed. reset controller. 00:14:18.836 23:50:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:18.836 23:50:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:18.836 [2024-07-15 23:50:33.789359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:18.836 task offset: 83200 on job bdev=Nvme0n1 fails 00:14:18.836 00:14:18.836 Latency(us) 00:14:18.836 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.836 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:18.836 Job: Nvme0n1 ended in about 0.41 seconds with error 00:14:18.836 Verification LBA range: start 0x0 length 0x400 00:14:18.836 Nvme0n1 : 0.41 1575.99 98.50 156.38 0.00 35779.64 1966.08 36481.71 00:14:18.836 =================================================================================================================== 00:14:18.837 Total : 1575.99 98.50 156.38 0.00 35779.64 1966.08 36481.71 00:14:18.837 [2024-07-15 23:50:33.791339] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:18.837 [2024-07-15 23:50:33.791364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae5540 (9): Bad file descriptor 00:14:18.837 23:50:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:18.837 23:50:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:14:18.837 [2024-07-15 23:50:33.841694] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
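The ABORTED - SQ DELETION storm and the controller reset above are consistent with the host-management step this test exercises: changing the subsystem's allowed-host list while bdevperf is driving I/O tears down the active queue pair, the in-flight job fails, and the initiator resets the controller. A rough sketch of the RPCs involved, assuming an SPDK checkout at $SPDK_DIR and the default RPC socket; only nvmf_subsystem_add_host is actually visible in this excerpt, the remove call is the complementary operation used to cut access and is shown here as an assumption:

SPDK_DIR=/path/to/spdk                      # assumption: local SPDK checkout
RPC=$SPDK_DIR/scripts/rpc.py
SUBSYS=nqn.2016-06.io.spdk:cnode0
HOST=nqn.2016-06.io.spdk:host0
# Removing a host disconnects its active queue pairs (the kind of teardown seen above).
$RPC nvmf_subsystem_remove_host $SUBSYS $HOST
# Re-granting access, as traced at target/host_management.sh@85.
$RPC nvmf_subsystem_add_host $SUBSYS $HOST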
00:14:19.780 23:50:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 384760 00:14:19.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (384760) - No such process 00:14:19.780 23:50:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:19.780 23:50:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:19.780 23:50:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:19.780 23:50:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:19.780 23:50:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:19.780 23:50:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:19.780 23:50:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:19.780 23:50:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:19.780 { 00:14:19.780 "params": { 00:14:19.780 "name": "Nvme$subsystem", 00:14:19.780 "trtype": "$TEST_TRANSPORT", 00:14:19.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:19.780 "adrfam": "ipv4", 00:14:19.780 "trsvcid": "$NVMF_PORT", 00:14:19.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:19.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:19.780 "hdgst": ${hdgst:-false}, 00:14:19.780 "ddgst": ${ddgst:-false} 00:14:19.780 }, 00:14:19.780 "method": "bdev_nvme_attach_controller" 00:14:19.780 } 00:14:19.780 EOF 00:14:19.780 )") 00:14:19.780 23:50:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:19.780 23:50:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:19.780 23:50:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:19.780 23:50:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:19.780 "params": { 00:14:19.780 "name": "Nvme0", 00:14:19.780 "trtype": "tcp", 00:14:19.780 "traddr": "10.0.0.2", 00:14:19.780 "adrfam": "ipv4", 00:14:19.780 "trsvcid": "4420", 00:14:19.780 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:19.780 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:19.780 "hdgst": false, 00:14:19.780 "ddgst": false 00:14:19.780 }, 00:14:19.780 "method": "bdev_nvme_attach_controller" 00:14:19.780 }' 00:14:19.780 [2024-07-15 23:50:34.864912] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:14:19.780 [2024-07-15 23:50:34.864965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid385218 ] 00:14:19.780 [2024-07-15 23:50:34.929955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.041 [2024-07-15 23:50:34.994798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.301 Running I/O for 1 seconds... 
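The gen_nvmf_target_json output above is handed to bdevperf through --json /dev/fd/62, i.e. the JSON is generated on the fly and fed via process substitution rather than written to disk. A hand-rolled standalone equivalent is sketched below; the file path and checkout location are assumptions, and the wrapping "subsystems"/"bdev" structure is implied by the helper but not fully visible in this excerpt:

SPDK_DIR=/path/to/spdk          # assumption: local SPDK checkout
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload flags as the trace: queue depth 64, 64 KiB I/O, verify workload, 1 second.
$SPDK_DIR/build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1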
00:14:21.243 00:14:21.243 Latency(us) 00:14:21.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.243 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:21.243 Verification LBA range: start 0x0 length 0x400 00:14:21.243 Nvme0n1 : 1.03 1503.44 93.97 0.00 0.00 41830.59 1665.71 35607.89 00:14:21.243 =================================================================================================================== 00:14:21.243 Total : 1503.44 93.97 0.00 0.00 41830.59 1665.71 35607.89 00:14:21.503 23:50:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:21.503 23:50:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:21.503 23:50:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:21.503 23:50:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:21.503 23:50:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:21.503 23:50:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:21.503 23:50:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:21.503 23:50:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:21.503 23:50:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:21.503 23:50:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:21.503 23:50:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:21.503 rmmod nvme_tcp 00:14:21.503 rmmod nvme_fabrics 00:14:21.503 rmmod nvme_keyring 00:14:21.503 23:50:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:21.503 23:50:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:21.503 23:50:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:21.503 23:50:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 384665 ']' 00:14:21.503 23:50:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 384665 00:14:21.503 23:50:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@942 -- # '[' -z 384665 ']' 00:14:21.504 23:50:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # kill -0 384665 00:14:21.504 23:50:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@947 -- # uname 00:14:21.504 23:50:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:14:21.504 23:50:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 384665 00:14:21.504 23:50:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:14:21.504 23:50:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:14:21.504 23:50:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@960 -- # echo 'killing process with pid 384665' 00:14:21.504 killing process with pid 384665 00:14:21.504 23:50:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@961 -- # kill 384665 00:14:21.504 23:50:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # wait 384665 00:14:21.765 [2024-07-15 23:50:36.724458] app.c: 
711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:21.765 23:50:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:21.765 23:50:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:21.765 23:50:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:21.765 23:50:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:21.765 23:50:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:21.765 23:50:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.765 23:50:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.765 23:50:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.674 23:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:23.674 23:50:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:23.674 00:14:23.674 real 0m14.962s 00:14:23.674 user 0m23.356s 00:14:23.674 sys 0m6.831s 00:14:23.674 23:50:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1118 -- # xtrace_disable 00:14:23.674 23:50:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:23.674 ************************************ 00:14:23.674 END TEST nvmf_host_management 00:14:23.674 ************************************ 00:14:23.674 23:50:38 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:14:23.674 23:50:38 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:23.674 23:50:38 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:14:23.674 23:50:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:14:23.674 23:50:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:23.972 ************************************ 00:14:23.972 START TEST nvmf_lvol 00:14:23.972 ************************************ 00:14:23.972 23:50:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:23.972 * Looking for test storage... 
00:14:23.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.972 23:50:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.972 23:50:39 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:23.972 23:50:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:32.185 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:32.185 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:32.185 Found net devices under 0000:31:00.0: cvl_0_0 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.185 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:32.185 Found net devices under 0000:31:00.1: cvl_0_1 00:14:32.186 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.186 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:32.186 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:32.186 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:32.186 
23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:32.186 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:32.186 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:32.186 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:32.186 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:32.186 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:32.186 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:32.186 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:32.186 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:32.186 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:32.186 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:32.186 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:32.186 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:32.186 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:32.186 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:32.186 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:32.186 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:32.186 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:32.186 23:50:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:32.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:32.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:14:32.186 00:14:32.186 --- 10.0.0.2 ping statistics --- 00:14:32.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.186 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:32.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:32.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:14:32.186 00:14:32.186 --- 10.0.0.1 ping statistics --- 00:14:32.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.186 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=390283 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 390283 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@823 -- # '[' -z 390283 ']' 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@828 -- # local max_retries=100 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # xtrace_disable 00:14:32.186 23:50:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:32.186 [2024-07-15 23:50:47.220861] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:14:32.186 [2024-07-15 23:50:47.220927] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.186 [2024-07-15 23:50:47.300854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:32.447 [2024-07-15 23:50:47.375175] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.447 [2024-07-15 23:50:47.375218] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
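The nvmf_tcp_init/nvmfappstart trace above splits the two e810 ports between the default namespace (initiator side, cvl_0_1, 10.0.0.1) and a cvl_0_0_ns_spdk namespace (target side, cvl_0_0, 10.0.0.2), verifies connectivity both ways, then launches nvmf_tgt inside the namespace. Condensed into a sketch, with interface names taken from this particular CI host and the checkout path assumed:

SPDK_DIR=/path/to/spdk                      # assumption: local SPDK checkout
ip netns add cvl_0_0_ns_spdk                # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk   # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator address (default namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                          # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# The target then runs inside the namespace on cores 0-2 (-m 0x7), as traced above:
ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &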
00:14:32.447 [2024-07-15 23:50:47.375226] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.447 [2024-07-15 23:50:47.375238] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.447 [2024-07-15 23:50:47.375244] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.447 [2024-07-15 23:50:47.375316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.447 [2024-07-15 23:50:47.375435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:32.447 [2024-07-15 23:50:47.375437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.019 23:50:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:14:33.019 23:50:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # return 0 00:14:33.019 23:50:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:33.019 23:50:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:33.019 23:50:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:33.019 23:50:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.019 23:50:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:33.019 [2024-07-15 23:50:48.188238] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:33.280 23:50:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:33.280 23:50:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:33.280 23:50:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:33.541 23:50:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:33.541 23:50:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:33.803 23:50:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:33.803 23:50:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=479a8aa5-c19b-4969-a0b9-365395ba9c2f 00:14:33.803 23:50:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 479a8aa5-c19b-4969-a0b9-365395ba9c2f lvol 20 00:14:34.065 23:50:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a764f534-c28c-470b-9b0b-0aaa005f5ee4 00:14:34.065 23:50:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:34.065 23:50:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a764f534-c28c-470b-9b0b-0aaa005f5ee4 00:14:34.326 23:50:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:14:34.588 [2024-07-15 23:50:49.537830] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:34.588 23:50:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:34.588 23:50:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=390793 00:14:34.588 23:50:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:34.588 23:50:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:35.969 23:50:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a764f534-c28c-470b-9b0b-0aaa005f5ee4 MY_SNAPSHOT 00:14:35.969 23:50:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f8a17daf-25a1-4990-9853-a0c3f3a95608 00:14:35.969 23:50:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a764f534-c28c-470b-9b0b-0aaa005f5ee4 30 00:14:36.230 23:50:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f8a17daf-25a1-4990-9853-a0c3f3a95608 MY_CLONE 00:14:36.230 23:50:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ae4a5262-299c-4771-9681-c9a468d0ce90 00:14:36.230 23:50:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ae4a5262-299c-4771-9681-c9a468d0ce90 00:14:36.801 23:50:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 390793 00:14:46.813 Initializing NVMe Controllers 00:14:46.813 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:46.813 Controller IO queue size 128, less than required. 00:14:46.813 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:46.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:46.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:46.813 Initialization complete. Launching workers. 
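Condensed, the lvol workflow traced above is: two 64 MiB malloc bdevs striped into a raid0, an lvstore on the raid, a 20 MiB lvol exported over NVMe/TCP, then, while spdk_nvme_perf writes to it, a snapshot, a resize of the live lvol to 30 MiB, a clone of the snapshot, and an inflate of the clone. As a flat RPC sequence (a sketch, assuming an SPDK checkout at $SPDK_DIR; the UUIDs in the trace are per-run, so they are captured into variables here rather than hard-coded):

SPDK_DIR=/path/to/spdk                               # assumption: local SPDK checkout
RPC=$SPDK_DIR/scripts/rpc.py                         # talks to the target over /var/tmp/spdk.sock
$RPC nvmf_create_transport -t tcp -o -u 8192
base0=$($RPC bdev_malloc_create 64 512)              # 64 MiB, 512 B blocks -> e.g. Malloc0
base1=$($RPC bdev_malloc_create 64 512)
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b "$base0 $base1"
lvs=$($RPC bdev_lvol_create_lvstore raid0 lvs)       # returns the lvstore UUID
lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB lvol, returns its UUID
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# With spdk_nvme_perf running against 10.0.0.2:4420 (flags as in the trace):
snap=$($RPC bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$RPC bdev_lvol_resize "$lvol" 30                     # grow the live lvol to 30 MiB
clone=$($RPC bdev_lvol_clone "$snap" MY_CLONE)
$RPC bdev_lvol_inflate "$clone"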
00:14:46.813 ======================================================== 00:14:46.813 Latency(us) 00:14:46.813 Device Information : IOPS MiB/s Average min max 00:14:46.813 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12390.50 48.40 10336.29 1839.33 49230.98 00:14:46.813 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17327.30 67.68 7388.02 1144.39 62763.61 00:14:46.813 ======================================================== 00:14:46.813 Total : 29717.80 116.09 8617.27 1144.39 62763.61 00:14:46.813 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a764f534-c28c-470b-9b0b-0aaa005f5ee4 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 479a8aa5-c19b-4969-a0b9-365395ba9c2f 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:46.813 rmmod nvme_tcp 00:14:46.813 rmmod nvme_fabrics 00:14:46.813 rmmod nvme_keyring 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 390283 ']' 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 390283 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@942 -- # '[' -z 390283 ']' 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # kill -0 390283 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@947 -- # uname 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 390283 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@960 -- # echo 'killing process with pid 390283' 00:14:46.813 killing process with pid 390283 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@961 -- # kill 390283 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # wait 390283 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:46.813 23:51:00 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.813 23:51:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.209 23:51:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:48.209 00:14:48.209 real 0m24.141s 00:14:48.209 user 1m4.245s 00:14:48.209 sys 0m8.332s 00:14:48.209 23:51:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1118 -- # xtrace_disable 00:14:48.209 23:51:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:48.209 ************************************ 00:14:48.209 END TEST nvmf_lvol 00:14:48.209 ************************************ 00:14:48.209 23:51:03 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:14:48.210 23:51:03 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:48.210 23:51:03 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:14:48.210 23:51:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:14:48.210 23:51:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:48.210 ************************************ 00:14:48.210 START TEST nvmf_lvs_grow 00:14:48.210 ************************************ 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:48.210 * Looking for test storage... 
00:14:48.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.210 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:48.211 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:48.211 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:48.211 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.211 23:51:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:48.211 23:51:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.211 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:48.211 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:48.211 23:51:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:48.211 23:51:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:56.367 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:56.367 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:56.367 Found net devices under 0000:31:00.0: cvl_0_0 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:56.367 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:56.368 Found net devices under 0000:31:00.1: cvl_0_1 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:56.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:56.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:14:56.368 00:14:56.368 --- 10.0.0.2 ping statistics --- 00:14:56.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.368 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:56.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:56.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:14:56.368 00:14:56.368 --- 10.0.0.1 ping statistics --- 00:14:56.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.368 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=398239 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 398239 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # '[' -z 398239 ']' 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@828 -- # local max_retries=100 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # xtrace_disable 00:14:56.368 23:51:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:56.368 [2024-07-15 23:51:11.546061] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:14:56.368 [2024-07-15 23:51:11.546127] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.629 [2024-07-15 23:51:11.625646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.629 [2024-07-15 23:51:11.698824] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.629 [2024-07-15 23:51:11.698863] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:56.629 [2024-07-15 23:51:11.698871] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.629 [2024-07-15 23:51:11.698877] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.629 [2024-07-15 23:51:11.698883] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.629 [2024-07-15 23:51:11.698901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.201 23:51:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:14:57.201 23:51:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # return 0 00:14:57.201 23:51:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:57.201 23:51:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:57.201 23:51:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:57.201 23:51:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.201 23:51:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:57.461 [2024-07-15 23:51:12.486069] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.461 23:51:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:57.461 23:51:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:14:57.461 23:51:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # xtrace_disable 00:14:57.461 23:51:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:57.461 ************************************ 00:14:57.461 START TEST lvs_grow_clean 00:14:57.461 ************************************ 00:14:57.461 23:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1117 -- # lvs_grow 00:14:57.461 23:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:57.461 23:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:57.461 23:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:57.461 23:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:57.461 23:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:57.461 23:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:57.461 23:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:57.461 23:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:57.461 23:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:57.723 23:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:14:57.723 23:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:57.723 23:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1a34dea1-dfc1-492d-b723-c42e72b17f12 00:14:57.723 23:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a34dea1-dfc1-492d-b723-c42e72b17f12 00:14:57.723 23:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:57.984 23:51:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:57.984 23:51:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:57.984 23:51:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1a34dea1-dfc1-492d-b723-c42e72b17f12 lvol 150 00:14:57.984 23:51:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=dc048c3b-4ccb-4a69-9042-6ce3a5ee4433 00:14:57.984 23:51:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:57.984 23:51:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:58.245 [2024-07-15 23:51:13.292179] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:58.245 [2024-07-15 23:51:13.292235] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:58.245 true 00:14:58.245 23:51:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a34dea1-dfc1-492d-b723-c42e72b17f12 00:14:58.245 23:51:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:58.506 23:51:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:58.506 23:51:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:58.506 23:51:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dc048c3b-4ccb-4a69-9042-6ce3a5ee4433 00:14:58.766 23:51:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:58.766 [2024-07-15 23:51:13.890011] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.766 23:51:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:59.026 23:51:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=398750 00:14:59.026 23:51:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:59.026 23:51:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:59.026 23:51:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 398750 /var/tmp/bdevperf.sock 00:14:59.026 23:51:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@823 -- # '[' -z 398750 ']' 00:14:59.026 23:51:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:59.026 23:51:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@828 -- # local max_retries=100 00:14:59.026 23:51:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:59.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:59.026 23:51:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # xtrace_disable 00:14:59.026 23:51:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:59.026 [2024-07-15 23:51:14.106408] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:14:59.026 [2024-07-15 23:51:14.106460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid398750 ] 00:14:59.026 [2024-07-15 23:51:14.189668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.285 [2024-07-15 23:51:14.254018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.856 23:51:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:14:59.856 23:51:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # return 0 00:14:59.856 23:51:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:00.118 Nvme0n1 00:15:00.118 23:51:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:00.118 [ 00:15:00.118 { 00:15:00.118 "name": "Nvme0n1", 00:15:00.118 "aliases": [ 00:15:00.118 "dc048c3b-4ccb-4a69-9042-6ce3a5ee4433" 00:15:00.118 ], 00:15:00.118 "product_name": "NVMe disk", 00:15:00.118 "block_size": 4096, 00:15:00.118 "num_blocks": 38912, 00:15:00.118 "uuid": "dc048c3b-4ccb-4a69-9042-6ce3a5ee4433", 00:15:00.118 "assigned_rate_limits": { 00:15:00.118 "rw_ios_per_sec": 0, 00:15:00.118 "rw_mbytes_per_sec": 0, 00:15:00.118 "r_mbytes_per_sec": 0, 00:15:00.118 "w_mbytes_per_sec": 0 00:15:00.118 }, 00:15:00.118 "claimed": false, 00:15:00.118 "zoned": false, 00:15:00.118 "supported_io_types": { 00:15:00.118 "read": true, 00:15:00.118 "write": true, 00:15:00.118 "unmap": true, 00:15:00.118 "flush": true, 00:15:00.118 "reset": true, 00:15:00.118 "nvme_admin": true, 00:15:00.118 "nvme_io": true, 00:15:00.118 "nvme_io_md": false, 00:15:00.118 "write_zeroes": true, 00:15:00.118 "zcopy": false, 00:15:00.118 "get_zone_info": false, 00:15:00.118 "zone_management": false, 00:15:00.118 "zone_append": false, 00:15:00.118 "compare": true, 00:15:00.118 "compare_and_write": true, 00:15:00.118 "abort": true, 00:15:00.118 "seek_hole": false, 00:15:00.118 "seek_data": false, 00:15:00.118 "copy": true, 00:15:00.118 "nvme_iov_md": false 00:15:00.118 }, 00:15:00.118 "memory_domains": [ 00:15:00.118 { 00:15:00.118 "dma_device_id": "system", 00:15:00.118 "dma_device_type": 1 00:15:00.118 } 00:15:00.118 ], 00:15:00.118 "driver_specific": { 00:15:00.118 "nvme": [ 00:15:00.118 { 00:15:00.118 "trid": { 00:15:00.118 "trtype": "TCP", 00:15:00.118 "adrfam": "IPv4", 00:15:00.118 "traddr": "10.0.0.2", 00:15:00.118 "trsvcid": "4420", 00:15:00.118 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:00.118 }, 00:15:00.118 "ctrlr_data": { 00:15:00.118 "cntlid": 1, 00:15:00.118 "vendor_id": "0x8086", 00:15:00.118 "model_number": "SPDK bdev Controller", 00:15:00.118 "serial_number": "SPDK0", 00:15:00.118 "firmware_revision": "24.09", 00:15:00.118 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:00.118 "oacs": { 00:15:00.118 "security": 0, 00:15:00.118 "format": 0, 00:15:00.118 "firmware": 0, 00:15:00.118 "ns_manage": 0 00:15:00.118 }, 00:15:00.118 "multi_ctrlr": true, 00:15:00.118 "ana_reporting": false 00:15:00.118 }, 00:15:00.118 "vs": { 00:15:00.118 "nvme_version": "1.3" 00:15:00.118 
}, 00:15:00.118 "ns_data": { 00:15:00.118 "id": 1, 00:15:00.118 "can_share": true 00:15:00.118 } 00:15:00.118 } 00:15:00.118 ], 00:15:00.118 "mp_policy": "active_passive" 00:15:00.118 } 00:15:00.118 } 00:15:00.118 ] 00:15:00.118 23:51:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=399090 00:15:00.118 23:51:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:00.118 23:51:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:00.379 Running I/O for 10 seconds... 00:15:01.321 Latency(us) 00:15:01.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.321 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.321 Nvme0n1 : 1.00 17921.00 70.00 0.00 0.00 0.00 0.00 0.00 00:15:01.321 =================================================================================================================== 00:15:01.321 Total : 17921.00 70.00 0.00 0.00 0.00 0.00 0.00 00:15:01.321 00:15:02.262 23:51:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1a34dea1-dfc1-492d-b723-c42e72b17f12 00:15:02.262 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:02.262 Nvme0n1 : 2.00 18080.00 70.62 0.00 0.00 0.00 0.00 0.00 00:15:02.262 =================================================================================================================== 00:15:02.262 Total : 18080.00 70.62 0.00 0.00 0.00 0.00 0.00 00:15:02.262 00:15:02.262 true 00:15:02.262 23:51:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a34dea1-dfc1-492d-b723-c42e72b17f12 00:15:02.262 23:51:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:02.521 23:51:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:02.521 23:51:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:02.522 23:51:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 399090 00:15:03.459 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.459 Nvme0n1 : 3.00 18133.00 70.83 0.00 0.00 0.00 0.00 0.00 00:15:03.459 =================================================================================================================== 00:15:03.459 Total : 18133.00 70.83 0.00 0.00 0.00 0.00 0.00 00:15:03.459 00:15:04.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.397 Nvme0n1 : 4.00 18164.25 70.95 0.00 0.00 0.00 0.00 0.00 00:15:04.397 =================================================================================================================== 00:15:04.397 Total : 18164.25 70.95 0.00 0.00 0.00 0.00 0.00 00:15:04.397 00:15:05.336 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.336 Nvme0n1 : 5.00 18188.40 71.05 0.00 0.00 0.00 0.00 0.00 00:15:05.336 =================================================================================================================== 00:15:05.336 Total : 18188.40 71.05 0.00 0.00 0.00 0.00 0.00 00:15:05.336 
00:15:06.278 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.278 Nvme0n1 : 6.00 18207.67 71.12 0.00 0.00 0.00 0.00 0.00 00:15:06.278 =================================================================================================================== 00:15:06.278 Total : 18207.67 71.12 0.00 0.00 0.00 0.00 0.00 00:15:06.278 00:15:07.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.218 Nvme0n1 : 7.00 18230.57 71.21 0.00 0.00 0.00 0.00 0.00 00:15:07.218 =================================================================================================================== 00:15:07.218 Total : 18230.57 71.21 0.00 0.00 0.00 0.00 0.00 00:15:07.218 00:15:08.603 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.603 Nvme0n1 : 8.00 18239.62 71.25 0.00 0.00 0.00 0.00 0.00 00:15:08.603 =================================================================================================================== 00:15:08.603 Total : 18239.62 71.25 0.00 0.00 0.00 0.00 0.00 00:15:08.603 00:15:09.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.544 Nvme0n1 : 9.00 18253.89 71.30 0.00 0.00 0.00 0.00 0.00 00:15:09.544 =================================================================================================================== 00:15:09.544 Total : 18253.89 71.30 0.00 0.00 0.00 0.00 0.00 00:15:09.544 00:15:10.485 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:10.485 Nvme0n1 : 10.00 18265.40 71.35 0.00 0.00 0.00 0.00 0.00 00:15:10.485 =================================================================================================================== 00:15:10.485 Total : 18265.40 71.35 0.00 0.00 0.00 0.00 0.00 00:15:10.485 00:15:10.485 00:15:10.485 Latency(us) 00:15:10.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.485 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:10.485 Nvme0n1 : 10.00 18264.45 71.35 0.00 0.00 7006.14 4177.92 17476.27 00:15:10.485 =================================================================================================================== 00:15:10.485 Total : 18264.45 71.35 0.00 0.00 7006.14 4177.92 17476.27 00:15:10.485 0 00:15:10.485 23:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 398750 00:15:10.485 23:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@942 -- # '[' -z 398750 ']' 00:15:10.485 23:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # kill -0 398750 00:15:10.485 23:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@947 -- # uname 00:15:10.485 23:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:15:10.485 23:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 398750 00:15:10.485 23:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:15:10.485 23:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:15:10.485 23:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 398750' 00:15:10.485 killing process with pid 398750 00:15:10.485 23:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@961 -- # kill 398750 
00:15:10.485 Received shutdown signal, test time was about 10.000000 seconds 00:15:10.485 00:15:10.485 Latency(us) 00:15:10.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.485 =================================================================================================================== 00:15:10.485 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:10.485 23:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # wait 398750 00:15:10.485 23:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:10.746 23:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:10.746 23:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a34dea1-dfc1-492d-b723-c42e72b17f12 00:15:10.746 23:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:11.006 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:11.006 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:11.006 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:11.006 [2024-07-15 23:51:26.175724] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:11.268 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a34dea1-dfc1-492d-b723-c42e72b17f12 00:15:11.268 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # local es=0 00:15:11.268 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a34dea1-dfc1-492d-b723-c42e72b17f12 00:15:11.268 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.268 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:15:11.268 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.268 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:15:11.268 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.268 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:15:11.268 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.268 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:11.268 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a34dea1-dfc1-492d-b723-c42e72b17f12 00:15:11.268 request: 00:15:11.268 { 00:15:11.268 "uuid": "1a34dea1-dfc1-492d-b723-c42e72b17f12", 00:15:11.268 "method": "bdev_lvol_get_lvstores", 00:15:11.268 "req_id": 1 00:15:11.268 } 00:15:11.268 Got JSON-RPC error response 00:15:11.268 response: 00:15:11.268 { 00:15:11.268 "code": -19, 00:15:11.268 "message": "No such device" 00:15:11.268 } 00:15:11.268 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@645 -- # es=1 00:15:11.268 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:15:11.268 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:15:11.268 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:15:11.268 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:11.528 aio_bdev 00:15:11.528 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev dc048c3b-4ccb-4a69-9042-6ce3a5ee4433 00:15:11.528 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@891 -- # local bdev_name=dc048c3b-4ccb-4a69-9042-6ce3a5ee4433 00:15:11.528 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:15:11.528 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@893 -- # local i 00:15:11.528 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:15:11.528 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:15:11.528 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:11.528 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dc048c3b-4ccb-4a69-9042-6ce3a5ee4433 -t 2000 00:15:11.790 [ 00:15:11.790 { 00:15:11.790 "name": "dc048c3b-4ccb-4a69-9042-6ce3a5ee4433", 00:15:11.790 "aliases": [ 00:15:11.790 "lvs/lvol" 00:15:11.790 ], 00:15:11.790 "product_name": "Logical Volume", 00:15:11.790 "block_size": 4096, 00:15:11.790 "num_blocks": 38912, 00:15:11.790 "uuid": "dc048c3b-4ccb-4a69-9042-6ce3a5ee4433", 00:15:11.790 "assigned_rate_limits": { 00:15:11.790 "rw_ios_per_sec": 0, 00:15:11.790 "rw_mbytes_per_sec": 0, 00:15:11.790 "r_mbytes_per_sec": 0, 00:15:11.790 "w_mbytes_per_sec": 0 00:15:11.790 }, 00:15:11.790 "claimed": false, 00:15:11.790 "zoned": false, 00:15:11.790 "supported_io_types": { 00:15:11.790 "read": true, 00:15:11.790 "write": true, 00:15:11.790 "unmap": true, 00:15:11.790 "flush": false, 00:15:11.790 "reset": true, 00:15:11.790 "nvme_admin": false, 00:15:11.790 "nvme_io": false, 00:15:11.790 "nvme_io_md": false, 00:15:11.790 "write_zeroes": true, 00:15:11.790 "zcopy": false, 00:15:11.790 "get_zone_info": false, 00:15:11.790 
"zone_management": false, 00:15:11.790 "zone_append": false, 00:15:11.790 "compare": false, 00:15:11.790 "compare_and_write": false, 00:15:11.790 "abort": false, 00:15:11.790 "seek_hole": true, 00:15:11.790 "seek_data": true, 00:15:11.790 "copy": false, 00:15:11.790 "nvme_iov_md": false 00:15:11.790 }, 00:15:11.790 "driver_specific": { 00:15:11.790 "lvol": { 00:15:11.790 "lvol_store_uuid": "1a34dea1-dfc1-492d-b723-c42e72b17f12", 00:15:11.790 "base_bdev": "aio_bdev", 00:15:11.790 "thin_provision": false, 00:15:11.790 "num_allocated_clusters": 38, 00:15:11.790 "snapshot": false, 00:15:11.790 "clone": false, 00:15:11.790 "esnap_clone": false 00:15:11.790 } 00:15:11.790 } 00:15:11.790 } 00:15:11.790 ] 00:15:11.790 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # return 0 00:15:11.790 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a34dea1-dfc1-492d-b723-c42e72b17f12 00:15:11.790 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:11.790 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:11.790 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1a34dea1-dfc1-492d-b723-c42e72b17f12 00:15:11.790 23:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:12.052 23:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:12.052 23:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dc048c3b-4ccb-4a69-9042-6ce3a5ee4433 00:15:12.315 23:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1a34dea1-dfc1-492d-b723-c42e72b17f12 00:15:12.315 23:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:12.642 23:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:12.642 00:15:12.642 real 0m15.087s 00:15:12.642 user 0m14.894s 00:15:12.642 sys 0m1.227s 00:15:12.642 23:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1118 -- # xtrace_disable 00:15:12.642 23:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:12.642 ************************************ 00:15:12.642 END TEST lvs_grow_clean 00:15:12.642 ************************************ 00:15:12.642 23:51:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1136 -- # return 0 00:15:12.642 23:51:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:12.642 23:51:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:15:12.642 23:51:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # xtrace_disable 00:15:12.642 23:51:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:12.642 ************************************ 
00:15:12.642 START TEST lvs_grow_dirty 00:15:12.642 ************************************ 00:15:12.642 23:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1117 -- # lvs_grow dirty 00:15:12.642 23:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:12.642 23:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:12.642 23:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:12.642 23:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:12.642 23:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:12.642 23:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:12.642 23:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:12.642 23:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:12.642 23:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:12.904 23:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:12.904 23:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:12.904 23:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0da437fa-251d-4ff8-9f80-1061e11c43f4 00:15:12.904 23:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0da437fa-251d-4ff8-9f80-1061e11c43f4 00:15:12.904 23:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:13.165 23:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:13.165 23:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:13.165 23:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0da437fa-251d-4ff8-9f80-1061e11c43f4 lvol 150 00:15:13.426 23:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=37b55e18-dd4a-4ae2-9c34-76b72c3e126b 00:15:13.426 23:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:13.426 23:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:13.426 [2024-07-15 23:51:28.500813] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:13.426 [2024-07-15 23:51:28.500865] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:13.426 true 00:15:13.426 23:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0da437fa-251d-4ff8-9f80-1061e11c43f4 00:15:13.426 23:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:13.687 23:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:13.687 23:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:13.687 23:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 37b55e18-dd4a-4ae2-9c34-76b72c3e126b 00:15:13.948 23:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:13.948 [2024-07-15 23:51:29.086596] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:13.948 23:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:14.209 23:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=401832 00:15:14.209 23:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:14.209 23:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:14.209 23:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 401832 /var/tmp/bdevperf.sock 00:15:14.209 23:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@823 -- # '[' -z 401832 ']' 00:15:14.209 23:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:14.209 23:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # local max_retries=100 00:15:14.209 23:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:14.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:14.209 23:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # xtrace_disable 00:15:14.209 23:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:14.209 [2024-07-15 23:51:29.295454] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:15:14.209 [2024-07-15 23:51:29.295505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid401832 ] 00:15:14.209 [2024-07-15 23:51:29.373757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.470 [2024-07-15 23:51:29.427514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.042 23:51:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:15:15.042 23:51:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # return 0 00:15:15.042 23:51:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:15.302 Nvme0n1 00:15:15.302 23:51:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:15.564 [ 00:15:15.564 { 00:15:15.564 "name": "Nvme0n1", 00:15:15.564 "aliases": [ 00:15:15.564 "37b55e18-dd4a-4ae2-9c34-76b72c3e126b" 00:15:15.564 ], 00:15:15.564 "product_name": "NVMe disk", 00:15:15.564 "block_size": 4096, 00:15:15.564 "num_blocks": 38912, 00:15:15.564 "uuid": "37b55e18-dd4a-4ae2-9c34-76b72c3e126b", 00:15:15.564 "assigned_rate_limits": { 00:15:15.564 "rw_ios_per_sec": 0, 00:15:15.564 "rw_mbytes_per_sec": 0, 00:15:15.564 "r_mbytes_per_sec": 0, 00:15:15.564 "w_mbytes_per_sec": 0 00:15:15.564 }, 00:15:15.564 "claimed": false, 00:15:15.564 "zoned": false, 00:15:15.564 "supported_io_types": { 00:15:15.564 "read": true, 00:15:15.564 "write": true, 00:15:15.564 "unmap": true, 00:15:15.564 "flush": true, 00:15:15.564 "reset": true, 00:15:15.564 "nvme_admin": true, 00:15:15.564 "nvme_io": true, 00:15:15.564 "nvme_io_md": false, 00:15:15.564 "write_zeroes": true, 00:15:15.564 "zcopy": false, 00:15:15.564 "get_zone_info": false, 00:15:15.564 "zone_management": false, 00:15:15.564 "zone_append": false, 00:15:15.564 "compare": true, 00:15:15.564 "compare_and_write": true, 00:15:15.564 "abort": true, 00:15:15.564 "seek_hole": false, 00:15:15.564 "seek_data": false, 00:15:15.564 "copy": true, 00:15:15.564 "nvme_iov_md": false 00:15:15.564 }, 00:15:15.564 "memory_domains": [ 00:15:15.564 { 00:15:15.564 "dma_device_id": "system", 00:15:15.564 "dma_device_type": 1 00:15:15.564 } 00:15:15.564 ], 00:15:15.564 "driver_specific": { 00:15:15.564 "nvme": [ 00:15:15.564 { 00:15:15.564 "trid": { 00:15:15.564 "trtype": "TCP", 00:15:15.564 "adrfam": "IPv4", 00:15:15.564 "traddr": "10.0.0.2", 00:15:15.564 "trsvcid": "4420", 00:15:15.564 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:15.564 }, 00:15:15.564 "ctrlr_data": { 00:15:15.564 "cntlid": 1, 00:15:15.564 "vendor_id": "0x8086", 00:15:15.564 "model_number": "SPDK bdev Controller", 00:15:15.564 "serial_number": "SPDK0", 00:15:15.564 "firmware_revision": "24.09", 00:15:15.564 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:15.564 "oacs": { 00:15:15.564 "security": 0, 00:15:15.564 "format": 0, 00:15:15.564 "firmware": 0, 00:15:15.564 "ns_manage": 0 00:15:15.564 }, 00:15:15.564 "multi_ctrlr": true, 00:15:15.564 "ana_reporting": false 00:15:15.564 }, 00:15:15.564 "vs": { 00:15:15.564 "nvme_version": "1.3" 00:15:15.564 
}, 00:15:15.564 "ns_data": { 00:15:15.564 "id": 1, 00:15:15.564 "can_share": true 00:15:15.564 } 00:15:15.564 } 00:15:15.564 ], 00:15:15.564 "mp_policy": "active_passive" 00:15:15.564 } 00:15:15.564 } 00:15:15.564 ] 00:15:15.564 23:51:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=402110 00:15:15.564 23:51:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:15.564 23:51:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:15.564 Running I/O for 10 seconds... 00:15:16.507 Latency(us) 00:15:16.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.507 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:16.507 Nvme0n1 : 1.00 18045.00 70.49 0.00 0.00 0.00 0.00 0.00 00:15:16.507 =================================================================================================================== 00:15:16.507 Total : 18045.00 70.49 0.00 0.00 0.00 0.00 0.00 00:15:16.507 00:15:17.450 23:51:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0da437fa-251d-4ff8-9f80-1061e11c43f4 00:15:17.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:17.450 Nvme0n1 : 2.00 18178.50 71.01 0.00 0.00 0.00 0.00 0.00 00:15:17.450 =================================================================================================================== 00:15:17.450 Total : 18178.50 71.01 0.00 0.00 0.00 0.00 0.00 00:15:17.450 00:15:17.710 true 00:15:17.710 23:51:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0da437fa-251d-4ff8-9f80-1061e11c43f4 00:15:17.710 23:51:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:17.710 23:51:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:17.711 23:51:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:17.711 23:51:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 402110 00:15:18.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:18.650 Nvme0n1 : 3.00 18223.00 71.18 0.00 0.00 0.00 0.00 0.00 00:15:18.650 =================================================================================================================== 00:15:18.650 Total : 18223.00 71.18 0.00 0.00 0.00 0.00 0.00 00:15:18.650 00:15:19.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:19.593 Nvme0n1 : 4.00 18254.00 71.30 0.00 0.00 0.00 0.00 0.00 00:15:19.593 =================================================================================================================== 00:15:19.593 Total : 18254.00 71.30 0.00 0.00 0.00 0.00 0.00 00:15:19.593 00:15:20.535 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:20.535 Nvme0n1 : 5.00 18285.40 71.43 0.00 0.00 0.00 0.00 0.00 00:15:20.535 =================================================================================================================== 00:15:20.535 Total : 18285.40 71.43 0.00 0.00 0.00 0.00 0.00 00:15:20.535 
00:15:21.478 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:21.478 Nvme0n1 : 6.00 18295.50 71.47 0.00 0.00 0.00 0.00 0.00 00:15:21.478 =================================================================================================================== 00:15:21.478 Total : 18295.50 71.47 0.00 0.00 0.00 0.00 0.00 00:15:21.478 00:15:22.864 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:22.864 Nvme0n1 : 7.00 18315.00 71.54 0.00 0.00 0.00 0.00 0.00 00:15:22.864 =================================================================================================================== 00:15:22.864 Total : 18315.00 71.54 0.00 0.00 0.00 0.00 0.00 00:15:22.864 00:15:23.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:23.807 Nvme0n1 : 8.00 18326.62 71.59 0.00 0.00 0.00 0.00 0.00 00:15:23.807 =================================================================================================================== 00:15:23.807 Total : 18326.62 71.59 0.00 0.00 0.00 0.00 0.00 00:15:23.807 00:15:24.748 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:24.749 Nvme0n1 : 9.00 18341.78 71.65 0.00 0.00 0.00 0.00 0.00 00:15:24.749 =================================================================================================================== 00:15:24.749 Total : 18341.78 71.65 0.00 0.00 0.00 0.00 0.00 00:15:24.749 00:15:25.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:25.690 Nvme0n1 : 10.00 18345.80 71.66 0.00 0.00 0.00 0.00 0.00 00:15:25.690 =================================================================================================================== 00:15:25.690 Total : 18345.80 71.66 0.00 0.00 0.00 0.00 0.00 00:15:25.690 00:15:25.690 00:15:25.690 Latency(us) 00:15:25.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:25.690 Nvme0n1 : 10.00 18347.48 71.67 0.00 0.00 6973.17 4150.61 13926.40 00:15:25.690 =================================================================================================================== 00:15:25.690 Total : 18347.48 71.67 0.00 0.00 6973.17 4150.61 13926.40 00:15:25.690 0 00:15:25.690 23:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 401832 00:15:25.690 23:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@942 -- # '[' -z 401832 ']' 00:15:25.690 23:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # kill -0 401832 00:15:25.690 23:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@947 -- # uname 00:15:25.690 23:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:15:25.690 23:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 401832 00:15:25.690 23:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:15:25.690 23:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:15:25.690 23:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # echo 'killing process with pid 401832' 00:15:25.690 killing process with pid 401832 00:15:25.690 23:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@961 -- # kill 401832 
00:15:25.690 Received shutdown signal, test time was about 10.000000 seconds 00:15:25.690 00:15:25.690 Latency(us) 00:15:25.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.690 =================================================================================================================== 00:15:25.690 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:25.690 23:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # wait 401832 00:15:25.690 23:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:25.951 23:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:26.212 23:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0da437fa-251d-4ff8-9f80-1061e11c43f4 00:15:26.212 23:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:26.212 23:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:26.212 23:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:26.212 23:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 398239 00:15:26.212 23:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 398239 00:15:26.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 398239 Killed "${NVMF_APP[@]}" "$@" 00:15:26.212 23:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:26.212 23:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:26.212 23:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:26.212 23:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:26.212 23:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:26.212 23:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=404195 00:15:26.212 23:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 404195 00:15:26.212 23:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:26.212 23:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@823 -- # '[' -z 404195 ']' 00:15:26.212 23:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.212 23:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # local max_retries=100 00:15:26.212 23:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:26.212 23:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # xtrace_disable 00:15:26.212 23:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:26.473 [2024-07-15 23:51:41.411550] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:15:26.473 [2024-07-15 23:51:41.411606] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.473 [2024-07-15 23:51:41.485791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.473 [2024-07-15 23:51:41.551755] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.473 [2024-07-15 23:51:41.551792] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.473 [2024-07-15 23:51:41.551799] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.473 [2024-07-15 23:51:41.551806] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.473 [2024-07-15 23:51:41.551812] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:26.473 [2024-07-15 23:51:41.551829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.045 23:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:15:27.045 23:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # return 0 00:15:27.045 23:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:27.045 23:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:27.045 23:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:27.045 23:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.045 23:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:27.306 [2024-07-15 23:51:42.344851] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:27.306 [2024-07-15 23:51:42.344941] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:27.306 [2024-07-15 23:51:42.344969] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:27.306 23:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:27.306 23:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 37b55e18-dd4a-4ae2-9c34-76b72c3e126b 00:15:27.306 23:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@891 -- # local bdev_name=37b55e18-dd4a-4ae2-9c34-76b72c3e126b 00:15:27.306 23:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:15:27.306 23:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@893 -- # local i 00:15:27.306 23:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@894 -- # [[ -z '' ]] 
00:15:27.306 23:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:15:27.306 23:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:27.566 23:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 37b55e18-dd4a-4ae2-9c34-76b72c3e126b -t 2000 00:15:27.566 [ 00:15:27.566 { 00:15:27.566 "name": "37b55e18-dd4a-4ae2-9c34-76b72c3e126b", 00:15:27.566 "aliases": [ 00:15:27.566 "lvs/lvol" 00:15:27.567 ], 00:15:27.567 "product_name": "Logical Volume", 00:15:27.567 "block_size": 4096, 00:15:27.567 "num_blocks": 38912, 00:15:27.567 "uuid": "37b55e18-dd4a-4ae2-9c34-76b72c3e126b", 00:15:27.567 "assigned_rate_limits": { 00:15:27.567 "rw_ios_per_sec": 0, 00:15:27.567 "rw_mbytes_per_sec": 0, 00:15:27.567 "r_mbytes_per_sec": 0, 00:15:27.567 "w_mbytes_per_sec": 0 00:15:27.567 }, 00:15:27.567 "claimed": false, 00:15:27.567 "zoned": false, 00:15:27.567 "supported_io_types": { 00:15:27.567 "read": true, 00:15:27.567 "write": true, 00:15:27.567 "unmap": true, 00:15:27.567 "flush": false, 00:15:27.567 "reset": true, 00:15:27.567 "nvme_admin": false, 00:15:27.567 "nvme_io": false, 00:15:27.567 "nvme_io_md": false, 00:15:27.567 "write_zeroes": true, 00:15:27.567 "zcopy": false, 00:15:27.567 "get_zone_info": false, 00:15:27.567 "zone_management": false, 00:15:27.567 "zone_append": false, 00:15:27.567 "compare": false, 00:15:27.567 "compare_and_write": false, 00:15:27.567 "abort": false, 00:15:27.567 "seek_hole": true, 00:15:27.567 "seek_data": true, 00:15:27.567 "copy": false, 00:15:27.567 "nvme_iov_md": false 00:15:27.567 }, 00:15:27.567 "driver_specific": { 00:15:27.567 "lvol": { 00:15:27.567 "lvol_store_uuid": "0da437fa-251d-4ff8-9f80-1061e11c43f4", 00:15:27.567 "base_bdev": "aio_bdev", 00:15:27.567 "thin_provision": false, 00:15:27.567 "num_allocated_clusters": 38, 00:15:27.567 "snapshot": false, 00:15:27.567 "clone": false, 00:15:27.567 "esnap_clone": false 00:15:27.567 } 00:15:27.567 } 00:15:27.567 } 00:15:27.567 ] 00:15:27.567 23:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # return 0 00:15:27.567 23:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0da437fa-251d-4ff8-9f80-1061e11c43f4 00:15:27.567 23:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:27.826 23:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:27.826 23:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0da437fa-251d-4ff8-9f80-1061e11c43f4 00:15:27.826 23:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:27.826 23:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:27.826 23:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:28.085 [2024-07-15 23:51:43.092662] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev 
being removed: closing lvstore lvs 00:15:28.085 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0da437fa-251d-4ff8-9f80-1061e11c43f4 00:15:28.085 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # local es=0 00:15:28.085 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0da437fa-251d-4ff8-9f80-1061e11c43f4 00:15:28.085 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:28.085 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:15:28.085 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:28.085 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:15:28.085 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:28.085 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:15:28.085 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:28.085 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:28.085 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0da437fa-251d-4ff8-9f80-1061e11c43f4 00:15:28.344 request: 00:15:28.345 { 00:15:28.345 "uuid": "0da437fa-251d-4ff8-9f80-1061e11c43f4", 00:15:28.345 "method": "bdev_lvol_get_lvstores", 00:15:28.345 "req_id": 1 00:15:28.345 } 00:15:28.345 Got JSON-RPC error response 00:15:28.345 response: 00:15:28.345 { 00:15:28.345 "code": -19, 00:15:28.345 "message": "No such device" 00:15:28.345 } 00:15:28.345 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@645 -- # es=1 00:15:28.345 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:15:28.345 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:15:28.345 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:15:28.345 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:28.345 aio_bdev 00:15:28.345 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 37b55e18-dd4a-4ae2-9c34-76b72c3e126b 00:15:28.345 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@891 -- # local bdev_name=37b55e18-dd4a-4ae2-9c34-76b72c3e126b 00:15:28.345 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@892 -- # local 
bdev_timeout= 00:15:28.345 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@893 -- # local i 00:15:28.345 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:15:28.345 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:15:28.345 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:28.604 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 37b55e18-dd4a-4ae2-9c34-76b72c3e126b -t 2000 00:15:28.604 [ 00:15:28.604 { 00:15:28.604 "name": "37b55e18-dd4a-4ae2-9c34-76b72c3e126b", 00:15:28.604 "aliases": [ 00:15:28.604 "lvs/lvol" 00:15:28.604 ], 00:15:28.604 "product_name": "Logical Volume", 00:15:28.604 "block_size": 4096, 00:15:28.604 "num_blocks": 38912, 00:15:28.604 "uuid": "37b55e18-dd4a-4ae2-9c34-76b72c3e126b", 00:15:28.604 "assigned_rate_limits": { 00:15:28.604 "rw_ios_per_sec": 0, 00:15:28.604 "rw_mbytes_per_sec": 0, 00:15:28.604 "r_mbytes_per_sec": 0, 00:15:28.604 "w_mbytes_per_sec": 0 00:15:28.604 }, 00:15:28.604 "claimed": false, 00:15:28.604 "zoned": false, 00:15:28.604 "supported_io_types": { 00:15:28.604 "read": true, 00:15:28.604 "write": true, 00:15:28.604 "unmap": true, 00:15:28.604 "flush": false, 00:15:28.604 "reset": true, 00:15:28.604 "nvme_admin": false, 00:15:28.604 "nvme_io": false, 00:15:28.604 "nvme_io_md": false, 00:15:28.604 "write_zeroes": true, 00:15:28.604 "zcopy": false, 00:15:28.604 "get_zone_info": false, 00:15:28.604 "zone_management": false, 00:15:28.604 "zone_append": false, 00:15:28.604 "compare": false, 00:15:28.604 "compare_and_write": false, 00:15:28.604 "abort": false, 00:15:28.604 "seek_hole": true, 00:15:28.604 "seek_data": true, 00:15:28.604 "copy": false, 00:15:28.604 "nvme_iov_md": false 00:15:28.604 }, 00:15:28.604 "driver_specific": { 00:15:28.604 "lvol": { 00:15:28.604 "lvol_store_uuid": "0da437fa-251d-4ff8-9f80-1061e11c43f4", 00:15:28.604 "base_bdev": "aio_bdev", 00:15:28.604 "thin_provision": false, 00:15:28.604 "num_allocated_clusters": 38, 00:15:28.604 "snapshot": false, 00:15:28.604 "clone": false, 00:15:28.604 "esnap_clone": false 00:15:28.604 } 00:15:28.604 } 00:15:28.604 } 00:15:28.604 ] 00:15:28.604 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # return 0 00:15:28.604 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0da437fa-251d-4ff8-9f80-1061e11c43f4 00:15:28.604 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:28.864 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:28.864 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:28.864 23:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0da437fa-251d-4ff8-9f80-1061e11c43f4 00:15:28.864 23:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:28.864 23:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 37b55e18-dd4a-4ae2-9c34-76b72c3e126b 00:15:29.124 23:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0da437fa-251d-4ff8-9f80-1061e11c43f4 00:15:29.383 23:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:29.383 23:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:29.383 00:15:29.383 real 0m16.869s 00:15:29.383 user 0m43.229s 00:15:29.383 sys 0m3.308s 00:15:29.383 23:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1118 -- # xtrace_disable 00:15:29.383 23:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:29.383 ************************************ 00:15:29.383 END TEST lvs_grow_dirty 00:15:29.383 ************************************ 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1136 -- # return 0 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@800 -- # type=--id 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@801 -- # id=0 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@802 -- # '[' --id = --pid ']' 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # shm_files=nvmf_trace.0 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # [[ -z nvmf_trace.0 ]] 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # for n in $shm_files 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:29.643 nvmf_trace.0 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # return 0 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:29.643 rmmod nvme_tcp 00:15:29.643 rmmod nvme_fabrics 00:15:29.643 rmmod nvme_keyring 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 404195 ']' 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # 
killprocess 404195 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@942 -- # '[' -z 404195 ']' 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # kill -0 404195 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@947 -- # uname 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 404195 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # echo 'killing process with pid 404195' 00:15:29.643 killing process with pid 404195 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@961 -- # kill 404195 00:15:29.643 23:51:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # wait 404195 00:15:29.902 23:51:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:29.902 23:51:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:29.902 23:51:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:29.902 23:51:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:29.902 23:51:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:29.902 23:51:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.902 23:51:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.902 23:51:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.814 23:51:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:31.814 00:15:31.814 real 0m43.867s 00:15:31.814 user 1m4.233s 00:15:31.814 sys 0m11.087s 00:15:31.814 23:51:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1118 -- # xtrace_disable 00:15:31.814 23:51:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:31.814 ************************************ 00:15:31.814 END TEST nvmf_lvs_grow 00:15:31.814 ************************************ 00:15:32.076 23:51:47 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:15:32.076 23:51:47 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:32.076 23:51:47 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:15:32.076 23:51:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:15:32.076 23:51:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:32.076 ************************************ 00:15:32.076 START TEST nvmf_bdev_io_wait 00:15:32.076 ************************************ 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:32.076 * Looking for test storage... 
00:15:32.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:32.076 23:51:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:40.221 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:40.221 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:40.221 Found net devices under 0000:31:00.0: cvl_0_0 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:40.221 Found net devices under 0000:31:00.1: cvl_0_1 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:40.221 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:40.222 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:40.222 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:40.222 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:40.222 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:40.222 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:40.222 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:40.222 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:40.222 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:40.222 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:40.222 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:40.222 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:40.222 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:40.222 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:40.222 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:40.222 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:40.222 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:40.222 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:40.222 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:40.222 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:40.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:40.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.746 ms 00:15:40.222 00:15:40.222 --- 10.0.0.2 ping statistics --- 00:15:40.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.222 rtt min/avg/max/mdev = 0.746/0.746/0.746/0.000 ms 00:15:40.222 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:40.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:40.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.488 ms 00:15:40.483 00:15:40.483 --- 10.0.0.1 ping statistics --- 00:15:40.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.483 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:15:40.483 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:40.483 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:40.483 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:40.483 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:40.483 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:40.483 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:40.483 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:40.483 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:40.483 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:40.483 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:40.483 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:40.483 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:40.483 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:40.483 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:40.483 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=409607 00:15:40.483 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 409607 00:15:40.483 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@823 -- # '[' -z 409607 ']' 00:15:40.483 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.483 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@828 -- # local max_retries=100 00:15:40.483 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.483 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # xtrace_disable 00:15:40.483 23:51:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:40.483 [2024-07-15 23:51:55.474817] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:15:40.483 [2024-07-15 23:51:55.474865] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.483 [2024-07-15 23:51:55.541748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:40.483 [2024-07-15 23:51:55.609729] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.483 [2024-07-15 23:51:55.609765] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.483 [2024-07-15 23:51:55.609773] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.483 [2024-07-15 23:51:55.609779] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.483 [2024-07-15 23:51:55.609785] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:40.483 [2024-07-15 23:51:55.609924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.483 [2024-07-15 23:51:55.610042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.483 [2024-07-15 23:51:55.610201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.483 [2024-07-15 23:51:55.610202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:41.427 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:15:41.427 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # return 0 00:15:41.427 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:41.427 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:41.427 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:41.427 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.427 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:41.427 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:41.427 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:41.428 [2024-07-15 23:51:56.374485] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:41.428 Malloc0 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:41.428 [2024-07-15 23:51:56.443509] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=409814 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=409816 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:41.428 { 00:15:41.428 "params": { 00:15:41.428 "name": "Nvme$subsystem", 00:15:41.428 "trtype": "$TEST_TRANSPORT", 00:15:41.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:41.428 "adrfam": "ipv4", 00:15:41.428 "trsvcid": "$NVMF_PORT", 00:15:41.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:41.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:41.428 "hdgst": ${hdgst:-false}, 00:15:41.428 "ddgst": ${ddgst:-false} 00:15:41.428 }, 00:15:41.428 "method": "bdev_nvme_attach_controller" 00:15:41.428 } 00:15:41.428 EOF 00:15:41.428 )") 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=409818 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=409821 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:41.428 { 00:15:41.428 "params": { 00:15:41.428 "name": "Nvme$subsystem", 00:15:41.428 "trtype": "$TEST_TRANSPORT", 00:15:41.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:41.428 "adrfam": "ipv4", 00:15:41.428 "trsvcid": "$NVMF_PORT", 00:15:41.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:41.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:41.428 "hdgst": ${hdgst:-false}, 00:15:41.428 "ddgst": ${ddgst:-false} 00:15:41.428 }, 00:15:41.428 "method": "bdev_nvme_attach_controller" 00:15:41.428 } 00:15:41.428 EOF 00:15:41.428 )") 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:41.428 { 00:15:41.428 "params": { 00:15:41.428 "name": "Nvme$subsystem", 00:15:41.428 "trtype": "$TEST_TRANSPORT", 00:15:41.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:41.428 "adrfam": "ipv4", 00:15:41.428 "trsvcid": "$NVMF_PORT", 00:15:41.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:41.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:41.428 "hdgst": ${hdgst:-false}, 00:15:41.428 "ddgst": ${ddgst:-false} 00:15:41.428 }, 00:15:41.428 "method": "bdev_nvme_attach_controller" 00:15:41.428 } 00:15:41.428 EOF 00:15:41.428 )") 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:41.428 { 00:15:41.428 
"params": { 00:15:41.428 "name": "Nvme$subsystem", 00:15:41.428 "trtype": "$TEST_TRANSPORT", 00:15:41.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:41.428 "adrfam": "ipv4", 00:15:41.428 "trsvcid": "$NVMF_PORT", 00:15:41.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:41.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:41.428 "hdgst": ${hdgst:-false}, 00:15:41.428 "ddgst": ${ddgst:-false} 00:15:41.428 }, 00:15:41.428 "method": "bdev_nvme_attach_controller" 00:15:41.428 } 00:15:41.428 EOF 00:15:41.428 )") 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 409814 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:41.428 "params": { 00:15:41.428 "name": "Nvme1", 00:15:41.428 "trtype": "tcp", 00:15:41.428 "traddr": "10.0.0.2", 00:15:41.428 "adrfam": "ipv4", 00:15:41.428 "trsvcid": "4420", 00:15:41.428 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.428 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:41.428 "hdgst": false, 00:15:41.428 "ddgst": false 00:15:41.428 }, 00:15:41.428 "method": "bdev_nvme_attach_controller" 00:15:41.428 }' 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:41.428 "params": { 00:15:41.428 "name": "Nvme1", 00:15:41.428 "trtype": "tcp", 00:15:41.428 "traddr": "10.0.0.2", 00:15:41.428 "adrfam": "ipv4", 00:15:41.428 "trsvcid": "4420", 00:15:41.428 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.428 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:41.428 "hdgst": false, 00:15:41.428 "ddgst": false 00:15:41.428 }, 00:15:41.428 "method": "bdev_nvme_attach_controller" 00:15:41.428 }' 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:41.428 "params": { 00:15:41.428 "name": "Nvme1", 00:15:41.428 "trtype": "tcp", 00:15:41.428 "traddr": "10.0.0.2", 00:15:41.428 "adrfam": "ipv4", 00:15:41.428 "trsvcid": "4420", 00:15:41.428 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.428 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:41.428 "hdgst": false, 00:15:41.428 "ddgst": false 00:15:41.428 }, 00:15:41.428 "method": "bdev_nvme_attach_controller" 00:15:41.428 }' 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:41.428 23:51:56 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:41.428 "params": { 00:15:41.429 "name": "Nvme1", 00:15:41.429 "trtype": "tcp", 00:15:41.429 "traddr": "10.0.0.2", 00:15:41.429 "adrfam": "ipv4", 00:15:41.429 "trsvcid": "4420", 00:15:41.429 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.429 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:41.429 "hdgst": false, 00:15:41.429 "ddgst": false 00:15:41.429 }, 00:15:41.429 "method": "bdev_nvme_attach_controller" 00:15:41.429 }' 00:15:41.429 
[2024-07-15 23:51:56.497018] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:15:41.429 [2024-07-15 23:51:56.497059] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:41.429 [2024-07-15 23:51:56.497185] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:15:41.429 [2024-07-15 23:51:56.497244] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:41.429 [2024-07-15 23:51:56.498086] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:15:41.429 [2024-07-15 23:51:56.498130] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:41.429 [2024-07-15 23:51:56.499922] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:15:41.429 [2024-07-15 23:51:56.499970] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:41.689 [2024-07-15 23:51:56.635309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.689 [2024-07-15 23:51:56.683931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.689 [2024-07-15 23:51:56.685303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:41.689 [2024-07-15 23:51:56.735429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:41.689 [2024-07-15 23:51:56.744408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.689 [2024-07-15 23:51:56.792285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.689 [2024-07-15 23:51:56.797010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:15:41.689 [2024-07-15 23:51:56.841137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:41.949 Running I/O for 1 seconds... 00:15:41.949 Running I/O for 1 seconds... 00:15:41.949 Running I/O for 1 seconds... 00:15:41.949 Running I/O for 1 seconds... 
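The four bdevperf instances are pinned with -m 0x10, 0x20, 0x40 and 0x80, which is why the reactors above report starting on cores 4, 5, 6 and 7: a single-bit core mask selects the core whose index is the bit position. A small illustrative helper (hypothetical, not part of the test scripts):

# mask_to_core: print the core index selected by a one-bit core mask.
mask_to_core() { local m=$(( $1 )) c=0; while (( m > 1 )); do m=$(( m >> 1 )); c=$(( c + 1 )); done; echo "$c"; }
for m in 0x10 0x20 0x40 0x80; do echo "$m -> core $(mask_to_core "$m")"; done
# prints: 0x10 -> core 4, 0x20 -> core 5, 0x40 -> core 6, 0x80 -> core 7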
00:15:42.890
00:15:42.890 Latency(us)
00:15:42.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:42.890 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:15:42.891 Nvme1n1 : 1.01 12526.74 48.93 0.00 0.00 10180.46 6417.07 18240.85
00:15:42.891 ===================================================================================================================
00:15:42.891 Total : 12526.74 48.93 0.00 0.00 10180.46 6417.07 18240.85
00:15:42.891
00:15:42.891 Latency(us)
00:15:42.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:42.891 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:15:42.891 Nvme1n1 : 1.01 12085.33 47.21 0.00 0.00 10557.53 5270.19 20862.29
00:15:42.891 ===================================================================================================================
00:15:42.891 Total : 12085.33 47.21 0.00 0.00 10557.53 5270.19 20862.29
00:15:42.891
00:15:42.891 Latency(us)
00:15:42.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:42.891 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:15:42.891 Nvme1n1 : 1.00 18827.46 73.54 0.00 0.00 6783.28 3345.07 16274.77
00:15:42.891 ===================================================================================================================
00:15:42.891 Total : 18827.46 73.54 0.00 0.00 6783.28 3345.07 16274.77
00:15:43.151
00:15:43.151 Latency(us)
00:15:43.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:43.151 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:15:43.151 Nvme1n1 : 1.00 188420.92 736.02 0.00 0.00 676.81 273.07 754.35
00:15:43.151 ===================================================================================================================
00:15:43.151 Total : 188420.92 736.02 0.00 0.00 676.81 273.07 754.35
00:15:43.151 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 409816
00:15:43.151 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 409818
00:15:43.151 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 409821
00:15:43.151 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:43.151 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable
00:15:43.151 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:15:43.151 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:15:43.151 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:15:43.151 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:15:43.151 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup
00:15:43.151 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync
00:15:43.151 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:15:43.151 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e
00:15:43.151 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:43.151 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:15:43.151 rmmod nvme_tcp
00:15:43.151 rmmod nvme_fabrics
00:15:43.151 rmmod nvme_keyring
00:15:43.151 23:51:58 nvmf_tcp.nvmf_bdev_io_wait
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:43.410 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:43.410 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:43.410 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 409607 ']' 00:15:43.410 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 409607 00:15:43.410 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@942 -- # '[' -z 409607 ']' 00:15:43.410 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # kill -0 409607 00:15:43.410 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@947 -- # uname 00:15:43.410 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:15:43.410 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 409607 00:15:43.410 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:15:43.410 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:15:43.410 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # echo 'killing process with pid 409607' 00:15:43.410 killing process with pid 409607 00:15:43.410 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@961 -- # kill 409607 00:15:43.410 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # wait 409607 00:15:43.410 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:43.410 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:43.410 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:43.410 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:43.410 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:43.410 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.410 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.410 23:51:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.994 23:52:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:45.994 00:15:45.994 real 0m13.544s 00:15:45.994 user 0m19.368s 00:15:45.994 sys 0m7.622s 00:15:45.994 23:52:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1118 -- # xtrace_disable 00:15:45.994 23:52:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:45.994 ************************************ 00:15:45.994 END TEST nvmf_bdev_io_wait 00:15:45.994 ************************************ 00:15:45.994 23:52:00 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:15:45.994 23:52:00 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:45.994 23:52:00 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:15:45.994 23:52:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:15:45.994 23:52:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:45.994 ************************************ 00:15:45.994 START TEST nvmf_queue_depth 00:15:45.994 ************************************ 
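The queue_depth test that starts here drives the target through rpc_cmd, which forwards to scripts/rpc.py against the nvmf_tgt launched later in the cvl_0_0_ns_spdk namespace. Condensed into standalone calls, the target-side sequence seen in the trace below is roughly the following sketch (arguments copied from the trace; running them by hand outside the test harness is an assumption):

# Sketch of the target-side RPC sequence (equivalent standalone form):
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420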
00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:45.994 * Looking for test storage... 00:15:45.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:45.994 23:52:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:54.191 
23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:54.191 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:54.191 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:54.191 Found net devices under 0000:31:00.0: cvl_0_0 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:54.191 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:54.191 Found net devices under 0000:31:00.1: cvl_0_1 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:54.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:54.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:15:54.192 00:15:54.192 --- 10.0.0.2 ping statistics --- 00:15:54.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.192 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:54.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:54.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:15:54.192 00:15:54.192 --- 10.0.0.1 ping statistics --- 00:15:54.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.192 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=414864 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 414864 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@823 -- # '[' -z 414864 ']' 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # local max_retries=100 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # xtrace_disable 00:15:54.192 23:52:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:54.192 [2024-07-15 23:52:08.945094] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:15:54.192 [2024-07-15 23:52:08.945149] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.192 [2024-07-15 23:52:09.037942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.192 [2024-07-15 23:52:09.131187] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.192 [2024-07-15 23:52:09.131258] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:54.192 [2024-07-15 23:52:09.131268] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:54.192 [2024-07-15 23:52:09.131275] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:54.192 [2024-07-15 23:52:09.131281] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:54.192 [2024-07-15 23:52:09.131306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.766 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:15:54.766 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # return 0 00:15:54.766 23:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:54.766 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:54.766 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:54.766 23:52:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.766 23:52:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:54.766 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:54.766 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:54.766 [2024-07-15 23:52:09.774531] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:54.766 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:54.766 23:52:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:54.766 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:54.767 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:54.767 Malloc0 00:15:54.767 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:54.767 23:52:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:54.767 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:54.767 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:54.767 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:54.767 23:52:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:54.767 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:54.767 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 
-- # set +x 00:15:54.767 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:54.767 23:52:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:54.767 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:54.767 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:54.767 [2024-07-15 23:52:09.828181] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.767 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:54.767 23:52:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=415034 00:15:54.767 23:52:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:54.767 23:52:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:54.767 23:52:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 415034 /var/tmp/bdevperf.sock 00:15:54.767 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@823 -- # '[' -z 415034 ']' 00:15:54.767 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:54.767 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # local max_retries=100 00:15:54.767 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:54.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:54.767 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # xtrace_disable 00:15:54.767 23:52:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:54.767 [2024-07-15 23:52:09.883031] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:15:54.767 [2024-07-15 23:52:09.883063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid415034 ]
00:15:54.767 [2024-07-15 23:52:09.953384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:55.028 [2024-07-15 23:52:10.028652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:15:55.599 23:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:15:55.599 23:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # return 0
00:15:55.599 23:52:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:15:55.599 23:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable
00:15:55.599 23:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:15:55.599 NVMe0n1
00:15:55.600 23:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:15:55.600 23:52:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:15:55.860 Running I/O for 10 seconds...
00:16:05.854
00:16:05.854 Latency(us)
00:16:05.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:05.854 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:16:05.855 Verification LBA range: start 0x0 length 0x4000
00:16:05.855 NVMe0n1 : 10.07 11364.23 44.39 0.00 0.00 89745.60 24794.45 72526.51
00:16:05.855 ===================================================================================================================
00:16:05.855 Total : 11364.23 44.39 0.00 0.00 89745.60 24794.45 72526.51
00:16:05.855 0
00:16:05.855 23:52:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 415034
00:16:05.855 23:52:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@942 -- # '[' -z 415034 ']'
00:16:05.855 23:52:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # kill -0 415034
00:16:05.855 23:52:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # uname
00:16:05.855 23:52:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:16:05.855 23:52:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 415034
00:16:05.855 23:52:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # process_name=reactor_0
00:16:05.855 23:52:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']'
00:16:05.855 23:52:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@960 -- # echo 'killing process with pid 415034'
00:16:05.855 killing process with pid 415034
00:16:05.855 23:52:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@961 -- # kill 415034
00:16:05.855 Received shutdown signal, test time was about 10.000000 seconds
00:16:05.855
00:16:05.855 Latency(us)
00:16:05.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:05.855 ===================================================================================================================
00:16:05.855 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
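The verify run above was driven from the host side in three steps visible in the trace: bdevperf started with -z (wait for configuration over RPC) on /var/tmp/bdevperf.sock, the remote controller attached through that socket, then bdevperf.py perform_tests to kick off I/O. Condensed sketch (commands taken from the trace; the shortened relative paths are illustrative):

# Host-side sequence for the -q 1024 verify run (condensed sketch):
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests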
00:16:05.855 23:52:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # wait 415034 00:16:06.114 23:52:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:06.114 23:52:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:16:06.114 23:52:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:06.114 23:52:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:16:06.114 23:52:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:06.114 23:52:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:16:06.114 23:52:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:06.114 23:52:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:06.114 rmmod nvme_tcp 00:16:06.114 rmmod nvme_fabrics 00:16:06.114 rmmod nvme_keyring 00:16:06.114 23:52:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:06.114 23:52:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:16:06.114 23:52:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:16:06.114 23:52:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 414864 ']' 00:16:06.114 23:52:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 414864 00:16:06.114 23:52:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@942 -- # '[' -z 414864 ']' 00:16:06.114 23:52:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # kill -0 414864 00:16:06.114 23:52:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # uname 00:16:06.114 23:52:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:16:06.114 23:52:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 414864 00:16:06.114 23:52:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:16:06.114 23:52:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:16:06.114 23:52:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@960 -- # echo 'killing process with pid 414864' 00:16:06.114 killing process with pid 414864 00:16:06.114 23:52:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@961 -- # kill 414864 00:16:06.114 23:52:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # wait 414864 00:16:06.374 23:52:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:06.374 23:52:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:06.374 23:52:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:06.374 23:52:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:06.374 23:52:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:06.374 23:52:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.374 23:52:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.374 23:52:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.298 23:52:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:08.298 00:16:08.298 real 0m22.753s 00:16:08.298 user 0m25.600s 00:16:08.298 sys 0m7.171s 00:16:08.298 23:52:23 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1118 -- # xtrace_disable 00:16:08.298 23:52:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:08.298 ************************************ 00:16:08.298 END TEST nvmf_queue_depth 00:16:08.298 ************************************ 00:16:08.298 23:52:23 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:16:08.298 23:52:23 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:08.298 23:52:23 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:16:08.298 23:52:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:16:08.298 23:52:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:08.563 ************************************ 00:16:08.563 START TEST nvmf_target_multipath 00:16:08.563 ************************************ 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:08.563 * Looking for test storage... 00:16:08.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:08.563 23:52:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:08.564 23:52:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:16:08.564 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:08.564 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.564 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:08.564 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:08.564 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:08.564 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.564 23:52:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.564 23:52:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.564 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:08.564 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:08.564 23:52:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:16:08.564 23:52:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:16.705 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:16.705 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:16:16.705 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:16.705 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:16.705 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:16.705 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:16.706 
23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:16.706 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:16.706 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:16.706 23:52:31 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:16.706 Found net devices under 0000:31:00.0: cvl_0_0 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:16.706 Found net devices under 0000:31:00.1: cvl_0_1 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:16.706 23:52:31 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:16.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:16.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:16:16.706 00:16:16.706 --- 10.0.0.2 ping statistics --- 00:16:16.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.706 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:16.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:16.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:16:16.706 00:16:16.706 --- 10.0.0.1 ping statistics --- 00:16:16.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.706 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:16.706 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:16.967 23:52:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:16:16.967 23:52:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:16.967 only one NIC for nvmf test 00:16:16.967 23:52:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:16:16.967 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:16.967 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:16.967 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:16.967 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:16.967 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:16.967 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:16.967 rmmod nvme_tcp 00:16:16.967 rmmod nvme_fabrics 00:16:16.967 rmmod nvme_keyring 00:16:16.967 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:16.967 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:16.967 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:16.967 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:16.967 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:16.967 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:16.967 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:16.967 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:16.967 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:16.967 23:52:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.967 23:52:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.967 23:52:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.881 23:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:16:18.881 23:52:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:16:18.881 23:52:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:18.881 23:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:18.881 23:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:18.881 23:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:18.881 23:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:18.881 23:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:18.881 23:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:18.881 23:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:18.881 23:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:18.881 23:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:18.881 23:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:18.881 23:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:18.881 23:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:18.881 23:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:18.881 23:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:18.881 23:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:18.881 23:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.881 23:52:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:18.881 23:52:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.142 23:52:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:19.142 00:16:19.142 real 0m10.565s 00:16:19.142 user 0m2.311s 00:16:19.142 sys 0m6.163s 00:16:19.142 23:52:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1118 -- # xtrace_disable 00:16:19.142 23:52:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:19.142 ************************************ 00:16:19.142 END TEST nvmf_target_multipath 00:16:19.142 ************************************ 00:16:19.142 23:52:34 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:16:19.142 23:52:34 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:19.142 23:52:34 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:16:19.142 23:52:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:16:19.142 23:52:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:19.142 ************************************ 00:16:19.142 START TEST nvmf_zcopy 00:16:19.142 ************************************ 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:19.142 * Looking for test storage... 
00:16:19.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:19.142 23:52:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:27.287 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:27.287 
23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:27.287 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:27.287 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:27.288 Found net devices under 0000:31:00.0: cvl_0_0 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:27.288 Found net devices under 0000:31:00.1: cvl_0_1 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:27.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:27.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:16:27.288 00:16:27.288 --- 10.0.0.2 ping statistics --- 00:16:27.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.288 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:27.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:27.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:16:27.288 00:16:27.288 --- 10.0.0.1 ping statistics --- 00:16:27.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.288 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=426715 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 426715 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@823 -- # '[' -z 426715 ']' 00:16:27.288 23:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.548 23:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@828 -- # local max_retries=100 00:16:27.548 23:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.548 23:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # xtrace_disable 00:16:27.548 23:52:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:27.548 [2024-07-15 23:52:42.525312] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:16:27.548 [2024-07-15 23:52:42.525359] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.548 [2024-07-15 23:52:42.616067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.548 [2024-07-15 23:52:42.691958] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.548 [2024-07-15 23:52:42.692015] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:27.548 [2024-07-15 23:52:42.692023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.548 [2024-07-15 23:52:42.692030] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.548 [2024-07-15 23:52:42.692036] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:27.548 [2024-07-15 23:52:42.692063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.118 23:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:16:28.118 23:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # return 0 00:16:28.118 23:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:28.118 23:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:28.118 23:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:28.377 [2024-07-15 23:52:43.353994] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:28.377 [2024-07-15 23:52:43.378258] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:28.377 malloc0 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:28.377 
23:52:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:28.377 { 00:16:28.377 "params": { 00:16:28.377 "name": "Nvme$subsystem", 00:16:28.377 "trtype": "$TEST_TRANSPORT", 00:16:28.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:28.377 "adrfam": "ipv4", 00:16:28.377 "trsvcid": "$NVMF_PORT", 00:16:28.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:28.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:28.377 "hdgst": ${hdgst:-false}, 00:16:28.377 "ddgst": ${ddgst:-false} 00:16:28.377 }, 00:16:28.377 "method": "bdev_nvme_attach_controller" 00:16:28.377 } 00:16:28.377 EOF 00:16:28.377 )") 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:28.377 23:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:28.377 "params": { 00:16:28.377 "name": "Nvme1", 00:16:28.377 "trtype": "tcp", 00:16:28.377 "traddr": "10.0.0.2", 00:16:28.377 "adrfam": "ipv4", 00:16:28.377 "trsvcid": "4420", 00:16:28.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:28.377 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:28.377 "hdgst": false, 00:16:28.377 "ddgst": false 00:16:28.377 }, 00:16:28.377 "method": "bdev_nvme_attach_controller" 00:16:28.377 }' 00:16:28.377 [2024-07-15 23:52:43.478507] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:16:28.377 [2024-07-15 23:52:43.478574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid426767 ] 00:16:28.377 [2024-07-15 23:52:43.549432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.637 [2024-07-15 23:52:43.623281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.637 Running I/O for 10 seconds... 
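For reference, everything the zcopy test has configured on the target side up to this point reduces to the nvmf_tgt app started inside the cvl_0_0_ns_spdk namespace plus a handful of RPCs. A minimal sketch using scripts/rpc.py directly follows; the rpc_cmd wrapper in the trace forwards to the same script, the flag sets are copied verbatim from target/zcopy.sh as shown above, and talking to the default /var/tmp/spdk.sock RPC socket is an assumption.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Target app, as started by nvmfappstart earlier in the trace (runs inside the test namespace):
#   ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

# TCP transport with zero-copy enabled (flags taken from target/zcopy.sh@22)
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

# Subsystem cnode1: any host allowed (-a), serial number, at most 10 namespaces
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

# Data and discovery listeners on the namespaced interface
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 32 MiB malloc bdev with 4096-byte blocks, exported as namespace 1
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1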
00:16:38.632 00:16:38.632 Latency(us) 00:16:38.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:38.632 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:38.632 Verification LBA range: start 0x0 length 0x1000 00:16:38.632 Nvme1n1 : 10.01 9402.45 73.46 0.00 0.00 13560.88 1460.91 25777.49 00:16:38.632 =================================================================================================================== 00:16:38.632 Total : 9402.45 73.46 0.00 0.00 13560.88 1460.91 25777.49 00:16:38.892 23:52:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=428824 00:16:38.892 23:52:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:16:38.892 23:52:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:38.892 23:52:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:38.892 23:52:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:38.892 23:52:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:38.892 23:52:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:38.892 23:52:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:38.892 23:52:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:38.892 { 00:16:38.892 "params": { 00:16:38.892 "name": "Nvme$subsystem", 00:16:38.892 "trtype": "$TEST_TRANSPORT", 00:16:38.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:38.892 "adrfam": "ipv4", 00:16:38.892 "trsvcid": "$NVMF_PORT", 00:16:38.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:38.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:38.892 "hdgst": ${hdgst:-false}, 00:16:38.892 "ddgst": ${ddgst:-false} 00:16:38.892 }, 00:16:38.892 "method": "bdev_nvme_attach_controller" 00:16:38.892 } 00:16:38.892 EOF 00:16:38.892 )") 00:16:38.892 [2024-07-15 23:52:53.936938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.892 [2024-07-15 23:52:53.936968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.892 23:52:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:38.892 23:52:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:16:38.892 23:52:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:38.892 23:52:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:38.892 "params": { 00:16:38.892 "name": "Nvme1", 00:16:38.892 "trtype": "tcp", 00:16:38.892 "traddr": "10.0.0.2", 00:16:38.892 "adrfam": "ipv4", 00:16:38.892 "trsvcid": "4420", 00:16:38.892 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:38.893 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:38.893 "hdgst": false, 00:16:38.893 "ddgst": false 00:16:38.893 }, 00:16:38.893 "method": "bdev_nvme_attach_controller" 00:16:38.893 }' 00:16:38.893 [2024-07-15 23:52:53.948932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.893 [2024-07-15 23:52:53.948941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.893 [2024-07-15 23:52:53.960960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.893 [2024-07-15 23:52:53.960968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.893 [2024-07-15 23:52:53.972989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.893 [2024-07-15 23:52:53.972998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.893 [2024-07-15 23:52:53.985020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.893 [2024-07-15 23:52:53.985028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.893 [2024-07-15 23:52:53.988101] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:16:38.893 [2024-07-15 23:52:53.988156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428824 ] 00:16:38.893 [2024-07-15 23:52:53.997050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.893 [2024-07-15 23:52:53.997058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.893 [2024-07-15 23:52:54.009081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.893 [2024-07-15 23:52:54.009090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.893 [2024-07-15 23:52:54.021111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.893 [2024-07-15 23:52:54.021120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.893 [2024-07-15 23:52:54.033143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.893 [2024-07-15 23:52:54.033151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.893 [2024-07-15 23:52:54.045172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.893 [2024-07-15 23:52:54.045180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.893 [2024-07-15 23:52:54.053673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.893 [2024-07-15 23:52:54.057204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.893 [2024-07-15 23:52:54.057212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
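The JSON fragment assembled and printed just above (nvmf/common.sh@554-@558) is the bdev_nvme_attach_controller entry that bdevperf reads over the /dev/fd/63 process substitution. A standalone sketch of the same thing, with the config written to a regular file, is shown below; the inner params object is copied from the printf output in the trace, while the outer "subsystems"/"bdev" wrapper follows the usual SPDK JSON-config layout and is an assumption, since gen_nvmf_target_json's full output is not echoed here.

cat > /tmp/bdevperf_nvme1.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON

# Second workload from the trace: 5 s of 50/50 random read/write, 8 KiB I/O, queue depth 128
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvme1.json -t 5 -q 128 -w randrw -M 50 -o 8192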
00:16:38.893 [2024-07-15 23:52:54.069240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.893 [2024-07-15 23:52:54.069249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.893 [2024-07-15 23:52:54.081269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.893 [2024-07-15 23:52:54.081278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.153 [2024-07-15 23:52:54.093298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.153 [2024-07-15 23:52:54.093312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.153 [2024-07-15 23:52:54.105328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.153 [2024-07-15 23:52:54.105338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.153 [2024-07-15 23:52:54.117357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.153 [2024-07-15 23:52:54.117366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.153 [2024-07-15 23:52:54.118393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.153 [2024-07-15 23:52:54.129389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.153 [2024-07-15 23:52:54.129399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.153 [2024-07-15 23:52:54.141425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.153 [2024-07-15 23:52:54.141440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.153 [2024-07-15 23:52:54.153451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.153 [2024-07-15 23:52:54.153461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.153 [2024-07-15 23:52:54.165481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.153 [2024-07-15 23:52:54.165490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.153 [2024-07-15 23:52:54.177511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.153 [2024-07-15 23:52:54.177519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.153 [2024-07-15 23:52:54.189554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.153 [2024-07-15 23:52:54.189568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.153 [2024-07-15 23:52:54.201573] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.153 [2024-07-15 23:52:54.201583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.153 [2024-07-15 23:52:54.213605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.153 [2024-07-15 23:52:54.213616] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.153 [2024-07-15 23:52:54.225638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.153 [2024-07-15 23:52:54.225649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.153 [2024-07-15 23:52:54.237668] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.153 [2024-07-15 23:52:54.237676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.153 [2024-07-15 23:52:54.249702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.153 [2024-07-15 23:52:54.249710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.153 [2024-07-15 23:52:54.261732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.153 [2024-07-15 23:52:54.261740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.153 [2024-07-15 23:52:54.273765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.153 [2024-07-15 23:52:54.273775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.154 [2024-07-15 23:52:54.285796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.154 [2024-07-15 23:52:54.285805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.154 [2024-07-15 23:52:54.297829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.154 [2024-07-15 23:52:54.297837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.154 [2024-07-15 23:52:54.309861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.154 [2024-07-15 23:52:54.309875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.154 [2024-07-15 23:52:54.321894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.154 [2024-07-15 23:52:54.321903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.154 [2024-07-15 23:52:54.333925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.154 [2024-07-15 23:52:54.333933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.414 [2024-07-15 23:52:54.345957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.414 [2024-07-15 23:52:54.345966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.414 [2024-07-15 23:52:54.357989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.414 [2024-07-15 23:52:54.357998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.414 [2024-07-15 23:52:54.370483] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.414 [2024-07-15 23:52:54.370497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.414 Running I/O for 5 seconds... 
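The repeated "Requested NSID 1 already in use" / "Unable to add namespace" ERROR lines surrounding the 5-second run are target-side log output rather than a test failure: each one is the target rejecting an add-namespace RPC for NSID 1, which is already occupied by malloc0, and judging by their steady cadence the test issues these deliberately while bdevperf I/O is in flight to exercise the subsystem pause/resume path. A hedged way to reproduce a single rejection against the target configured above (assuming the default RPC socket):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Fails, because the subsystem already exposes malloc0 as namespace 1
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# The target then logs lines of the form seen in this trace:
#   subsystem.c: spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
#   nvmf_rpc.c: nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace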
00:16:39.414 [2024-07-15 23:52:54.382058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.414 [2024-07-15 23:52:54.382069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.414 [2024-07-15 23:52:54.396592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.414 [2024-07-15 23:52:54.396609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.414 [2024-07-15 23:52:54.410163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.414 [2024-07-15 23:52:54.410180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.414 [2024-07-15 23:52:54.423127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.414 [2024-07-15 23:52:54.423145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.414 [2024-07-15 23:52:54.436545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.414 [2024-07-15 23:52:54.436564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.414 [2024-07-15 23:52:54.449302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.414 [2024-07-15 23:52:54.449319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.414 [2024-07-15 23:52:54.463041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.414 [2024-07-15 23:52:54.463057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.414 [2024-07-15 23:52:54.475917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.414 [2024-07-15 23:52:54.475932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.414 [2024-07-15 23:52:54.489450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.414 [2024-07-15 23:52:54.489467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.414 [2024-07-15 23:52:54.502009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.414 [2024-07-15 23:52:54.502025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.414 [2024-07-15 23:52:54.514940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.414 [2024-07-15 23:52:54.514956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.414 [2024-07-15 23:52:54.527626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.414 [2024-07-15 23:52:54.527641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.414 [2024-07-15 23:52:54.540654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.414 [2024-07-15 23:52:54.540670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.414 [2024-07-15 23:52:54.553661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.414 [2024-07-15 23:52:54.553677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.414 [2024-07-15 23:52:54.566501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.414 
[2024-07-15 23:52:54.566517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.414 [2024-07-15 23:52:54.579412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.414 [2024-07-15 23:52:54.579428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.414 [2024-07-15 23:52:54.592594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.414 [2024-07-15 23:52:54.592610] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.675 [2024-07-15 23:52:54.605343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.675 [2024-07-15 23:52:54.605359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.675 [2024-07-15 23:52:54.618502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.675 [2024-07-15 23:52:54.618517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.675 [2024-07-15 23:52:54.631635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.675 [2024-07-15 23:52:54.631652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.675 [2024-07-15 23:52:54.645247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.675 [2024-07-15 23:52:54.645262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.675 [2024-07-15 23:52:54.659039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.675 [2024-07-15 23:52:54.659054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.675 [2024-07-15 23:52:54.671437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.675 [2024-07-15 23:52:54.671453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.675 [2024-07-15 23:52:54.684236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.675 [2024-07-15 23:52:54.684251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.675 [2024-07-15 23:52:54.698042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.675 [2024-07-15 23:52:54.698057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.675 [2024-07-15 23:52:54.710573] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.675 [2024-07-15 23:52:54.710588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.675 [2024-07-15 23:52:54.723415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.675 [2024-07-15 23:52:54.723431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.675 [2024-07-15 23:52:54.736275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.675 [2024-07-15 23:52:54.736290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.675 [2024-07-15 23:52:54.749610] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.675 [2024-07-15 23:52:54.749625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.675 [2024-07-15 23:52:54.762493] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.675 [2024-07-15 23:52:54.762507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.675 [2024-07-15 23:52:54.775060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.675 [2024-07-15 23:52:54.775075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.675 [2024-07-15 23:52:54.787716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.675 [2024-07-15 23:52:54.787731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.675 [2024-07-15 23:52:54.800664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.675 [2024-07-15 23:52:54.800680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.675 [2024-07-15 23:52:54.814119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.675 [2024-07-15 23:52:54.814133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.675 [2024-07-15 23:52:54.827640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.675 [2024-07-15 23:52:54.827655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.675 [2024-07-15 23:52:54.841165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.675 [2024-07-15 23:52:54.841180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.675 [2024-07-15 23:52:54.853509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.675 [2024-07-15 23:52:54.853523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.936 [2024-07-15 23:52:54.866561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.936 [2024-07-15 23:52:54.866576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.936 [2024-07-15 23:52:54.879973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.936 [2024-07-15 23:52:54.879988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.936 [2024-07-15 23:52:54.892596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.936 [2024-07-15 23:52:54.892611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.936 [2024-07-15 23:52:54.905198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.936 [2024-07-15 23:52:54.905213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.936 [2024-07-15 23:52:54.918805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.936 [2024-07-15 23:52:54.918820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.936 [2024-07-15 23:52:54.932148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.936 [2024-07-15 23:52:54.932164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.936 [2024-07-15 23:52:54.945725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.936 [2024-07-15 23:52:54.945740] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.936 [2024-07-15 23:52:54.958543] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.936 [2024-07-15 23:52:54.958558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.936 [2024-07-15 23:52:54.971788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.936 [2024-07-15 23:52:54.971803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.936 [2024-07-15 23:52:54.984864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.936 [2024-07-15 23:52:54.984879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.936 [2024-07-15 23:52:54.997660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.936 [2024-07-15 23:52:54.997676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.936 [2024-07-15 23:52:55.011431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.936 [2024-07-15 23:52:55.011446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.936 [2024-07-15 23:52:55.024548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.936 [2024-07-15 23:52:55.024564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.936 [2024-07-15 23:52:55.037749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.936 [2024-07-15 23:52:55.037764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.936 [2024-07-15 23:52:55.050354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.936 [2024-07-15 23:52:55.050369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.936 [2024-07-15 23:52:55.064038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.936 [2024-07-15 23:52:55.064054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.936 [2024-07-15 23:52:55.077059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.936 [2024-07-15 23:52:55.077074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.936 [2024-07-15 23:52:55.089949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.936 [2024-07-15 23:52:55.089964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.936 [2024-07-15 23:52:55.102657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.936 [2024-07-15 23:52:55.102672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.936 [2024-07-15 23:52:55.115747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.936 [2024-07-15 23:52:55.115763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.197 [2024-07-15 23:52:55.128416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.197 [2024-07-15 23:52:55.128431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.197 [2024-07-15 23:52:55.141821] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.197 [2024-07-15 23:52:55.141836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.197 [2024-07-15 23:52:55.154180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.197 [2024-07-15 23:52:55.154196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.197 [2024-07-15 23:52:55.166962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.197 [2024-07-15 23:52:55.166977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.197 [2024-07-15 23:52:55.179842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.197 [2024-07-15 23:52:55.179857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.197 [2024-07-15 23:52:55.192150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.197 [2024-07-15 23:52:55.192165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.197 [2024-07-15 23:52:55.205970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.197 [2024-07-15 23:52:55.205985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.197 [2024-07-15 23:52:55.219244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.197 [2024-07-15 23:52:55.219260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.197 [2024-07-15 23:52:55.232471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.197 [2024-07-15 23:52:55.232487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.197 [2024-07-15 23:52:55.245737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.197 [2024-07-15 23:52:55.245752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.197 [2024-07-15 23:52:55.259299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.197 [2024-07-15 23:52:55.259314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.197 [2024-07-15 23:52:55.271824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.197 [2024-07-15 23:52:55.271839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.197 [2024-07-15 23:52:55.285111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.197 [2024-07-15 23:52:55.285130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.197 [2024-07-15 23:52:55.298543] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.197 [2024-07-15 23:52:55.298558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.197 [2024-07-15 23:52:55.311507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.197 [2024-07-15 23:52:55.311522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.197 [2024-07-15 23:52:55.324335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.197 [2024-07-15 23:52:55.324351] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.197 [2024-07-15 23:52:55.336783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.197 [2024-07-15 23:52:55.336799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.197 [2024-07-15 23:52:55.349790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.197 [2024-07-15 23:52:55.349804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.197 [2024-07-15 23:52:55.362765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.197 [2024-07-15 23:52:55.362780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.197 [2024-07-15 23:52:55.375632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.197 [2024-07-15 23:52:55.375647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.458 [2024-07-15 23:52:55.388503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.458 [2024-07-15 23:52:55.388519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.458 [2024-07-15 23:52:55.401547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.458 [2024-07-15 23:52:55.401563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.458 [2024-07-15 23:52:55.414265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.458 [2024-07-15 23:52:55.414280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.458 [2024-07-15 23:52:55.427337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.458 [2024-07-15 23:52:55.427352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.458 [2024-07-15 23:52:55.440216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.458 [2024-07-15 23:52:55.440235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.458 [2024-07-15 23:52:55.453530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.458 [2024-07-15 23:52:55.453545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.458 [2024-07-15 23:52:55.466657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.458 [2024-07-15 23:52:55.466673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.458 [2024-07-15 23:52:55.479960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.458 [2024-07-15 23:52:55.479976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.458 [2024-07-15 23:52:55.493341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.458 [2024-07-15 23:52:55.493357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.458 [2024-07-15 23:52:55.506567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.458 [2024-07-15 23:52:55.506582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.458 [2024-07-15 23:52:55.520000] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.458 [2024-07-15 23:52:55.520016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.458 [2024-07-15 23:52:55.533467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.458 [2024-07-15 23:52:55.533486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.458 [2024-07-15 23:52:55.547068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.458 [2024-07-15 23:52:55.547084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.458 [2024-07-15 23:52:55.560729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.458 [2024-07-15 23:52:55.560746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.458 [2024-07-15 23:52:55.574046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.458 [2024-07-15 23:52:55.574061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.458 [2024-07-15 23:52:55.586481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.458 [2024-07-15 23:52:55.586497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.458 [2024-07-15 23:52:55.599824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.458 [2024-07-15 23:52:55.599839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.458 [2024-07-15 23:52:55.612869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.458 [2024-07-15 23:52:55.612884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.458 [2024-07-15 23:52:55.625912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.458 [2024-07-15 23:52:55.625928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.458 [2024-07-15 23:52:55.639247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.458 [2024-07-15 23:52:55.639263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.719 [2024-07-15 23:52:55.652973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.719 [2024-07-15 23:52:55.652989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.719 [2024-07-15 23:52:55.666597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.719 [2024-07-15 23:52:55.666612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.719 [2024-07-15 23:52:55.680243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.719 [2024-07-15 23:52:55.680258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.719 [2024-07-15 23:52:55.693224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.719 [2024-07-15 23:52:55.693245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.719 [2024-07-15 23:52:55.705966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.719 [2024-07-15 23:52:55.705981] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.719 [2024-07-15 23:52:55.718922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.719 [2024-07-15 23:52:55.718937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.719 [2024-07-15 23:52:55.731446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.719 [2024-07-15 23:52:55.731462] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.719 [2024-07-15 23:52:55.744726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.719 [2024-07-15 23:52:55.744742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.719 [2024-07-15 23:52:55.757215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.719 [2024-07-15 23:52:55.757236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.719 [2024-07-15 23:52:55.770161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.719 [2024-07-15 23:52:55.770177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.719 [2024-07-15 23:52:55.782817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.719 [2024-07-15 23:52:55.782837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.719 [2024-07-15 23:52:55.795770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.719 [2024-07-15 23:52:55.795786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.719 [2024-07-15 23:52:55.808615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.719 [2024-07-15 23:52:55.808631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.719 [2024-07-15 23:52:55.821288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.719 [2024-07-15 23:52:55.821304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.719 [2024-07-15 23:52:55.834572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.719 [2024-07-15 23:52:55.834588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.719 [2024-07-15 23:52:55.847730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.719 [2024-07-15 23:52:55.847746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.719 [2024-07-15 23:52:55.860642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.719 [2024-07-15 23:52:55.860658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.719 [2024-07-15 23:52:55.873588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.719 [2024-07-15 23:52:55.873603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.719 [2024-07-15 23:52:55.886305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.719 [2024-07-15 23:52:55.886320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.719 [2024-07-15 23:52:55.899443] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.719 [2024-07-15 23:52:55.899459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.980 [2024-07-15 23:52:55.912087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.980 [2024-07-15 23:52:55.912103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.980 [2024-07-15 23:52:55.924913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.980 [2024-07-15 23:52:55.924929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.980 [2024-07-15 23:52:55.937968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.980 [2024-07-15 23:52:55.937984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.981 [2024-07-15 23:52:55.951837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.981 [2024-07-15 23:52:55.951853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.981 [2024-07-15 23:52:55.965441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.981 [2024-07-15 23:52:55.965457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.981 [2024-07-15 23:52:55.979168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.981 [2024-07-15 23:52:55.979184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.981 [2024-07-15 23:52:55.992754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.981 [2024-07-15 23:52:55.992770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.981 [2024-07-15 23:52:56.005863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.981 [2024-07-15 23:52:56.005879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.981 [2024-07-15 23:52:56.018661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.981 [2024-07-15 23:52:56.018677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.981 [2024-07-15 23:52:56.032247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.981 [2024-07-15 23:52:56.032266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.981 [2024-07-15 23:52:56.045514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.981 [2024-07-15 23:52:56.045530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.981 [2024-07-15 23:52:56.059046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.981 [2024-07-15 23:52:56.059062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.981 [2024-07-15 23:52:56.072605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.981 [2024-07-15 23:52:56.072621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.981 [2024-07-15 23:52:56.085983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.981 [2024-07-15 23:52:56.085999] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.981 [2024-07-15 23:52:56.099708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.981 [2024-07-15 23:52:56.099724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.981 [2024-07-15 23:52:56.113217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.981 [2024-07-15 23:52:56.113237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.981 [2024-07-15 23:52:56.126789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.981 [2024-07-15 23:52:56.126804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.981 [2024-07-15 23:52:56.139440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.981 [2024-07-15 23:52:56.139456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.981 [2024-07-15 23:52:56.152964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.981 [2024-07-15 23:52:56.152979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.981 [2024-07-15 23:52:56.166025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.981 [2024-07-15 23:52:56.166041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.241 [2024-07-15 23:52:56.179383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.241 [2024-07-15 23:52:56.179399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.241 [2024-07-15 23:52:56.192999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.241 [2024-07-15 23:52:56.193014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.241 [2024-07-15 23:52:56.206126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.241 [2024-07-15 23:52:56.206140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.241 [2024-07-15 23:52:56.218871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.241 [2024-07-15 23:52:56.218886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.242 [2024-07-15 23:52:56.232251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.242 [2024-07-15 23:52:56.232268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.242 [2024-07-15 23:52:56.245309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.242 [2024-07-15 23:52:56.245325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.242 [2024-07-15 23:52:56.258593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.242 [2024-07-15 23:52:56.258608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.242 [2024-07-15 23:52:56.271507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.242 [2024-07-15 23:52:56.271522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.242 [2024-07-15 23:52:56.284908] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.242 [2024-07-15 23:52:56.284924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.242 [2024-07-15 23:52:56.298132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.242 [2024-07-15 23:52:56.298147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.242 [2024-07-15 23:52:56.311287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.242 [2024-07-15 23:52:56.311302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.242 [2024-07-15 23:52:56.323919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.242 [2024-07-15 23:52:56.323934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.242 [2024-07-15 23:52:56.337009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.242 [2024-07-15 23:52:56.337024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.242 [2024-07-15 23:52:56.349762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.242 [2024-07-15 23:52:56.349776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.242 [2024-07-15 23:52:56.363401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.242 [2024-07-15 23:52:56.363416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.242 [2024-07-15 23:52:56.376235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.242 [2024-07-15 23:52:56.376250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.242 [2024-07-15 23:52:56.388987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.242 [2024-07-15 23:52:56.389002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.242 [2024-07-15 23:52:56.401887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.242 [2024-07-15 23:52:56.401902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.242 [2024-07-15 23:52:56.414616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.242 [2024-07-15 23:52:56.414632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.242 [2024-07-15 23:52:56.427747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.242 [2024-07-15 23:52:56.427762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.503 [2024-07-15 23:52:56.440392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.503 [2024-07-15 23:52:56.440408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.503 [2024-07-15 23:52:56.453668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.503 [2024-07-15 23:52:56.453683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.503 [2024-07-15 23:52:56.466788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.503 [2024-07-15 23:52:56.466803] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.503 [2024-07-15 23:52:56.479992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.503 [2024-07-15 23:52:56.480008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.503 [2024-07-15 23:52:56.493273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.503 [2024-07-15 23:52:56.493289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.503 [2024-07-15 23:52:56.506154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.503 [2024-07-15 23:52:56.506169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.503 [2024-07-15 23:52:56.519480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.503 [2024-07-15 23:52:56.519495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.503 [2024-07-15 23:52:56.532319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.503 [2024-07-15 23:52:56.532334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.503 [2024-07-15 23:52:56.545722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.503 [2024-07-15 23:52:56.545737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.503 [2024-07-15 23:52:56.558756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.503 [2024-07-15 23:52:56.558771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.503 [2024-07-15 23:52:56.571438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.503 [2024-07-15 23:52:56.571454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.503 [2024-07-15 23:52:56.584084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.503 [2024-07-15 23:52:56.584099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.503 [2024-07-15 23:52:56.597184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.503 [2024-07-15 23:52:56.597198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.503 [2024-07-15 23:52:56.610455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.503 [2024-07-15 23:52:56.610471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.503 [2024-07-15 23:52:56.623213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.503 [2024-07-15 23:52:56.623228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.503 [2024-07-15 23:52:56.636026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.503 [2024-07-15 23:52:56.636041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.503 [2024-07-15 23:52:56.649064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.503 [2024-07-15 23:52:56.649080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.503 [2024-07-15 23:52:56.662239] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.503 [2024-07-15 23:52:56.662254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.503 [2024-07-15 23:52:56.675341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.503 [2024-07-15 23:52:56.675356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.503 [2024-07-15 23:52:56.688852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.503 [2024-07-15 23:52:56.688867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.764 [2024-07-15 23:52:56.702098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.764 [2024-07-15 23:52:56.702113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.764 [2024-07-15 23:52:56.714569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.764 [2024-07-15 23:52:56.714584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.764 [2024-07-15 23:52:56.728124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.764 [2024-07-15 23:52:56.728139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.764 [2024-07-15 23:52:56.740868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.764 [2024-07-15 23:52:56.740883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.764 [2024-07-15 23:52:56.754145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.764 [2024-07-15 23:52:56.754161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.764 [2024-07-15 23:52:56.767517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.764 [2024-07-15 23:52:56.767532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.764 [2024-07-15 23:52:56.780922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.764 [2024-07-15 23:52:56.780937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.764 [2024-07-15 23:52:56.793811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.764 [2024-07-15 23:52:56.793825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.764 [2024-07-15 23:52:56.806866] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.764 [2024-07-15 23:52:56.806881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.764 [2024-07-15 23:52:56.820474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.764 [2024-07-15 23:52:56.820488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.764 [2024-07-15 23:52:56.833554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.764 [2024-07-15 23:52:56.833569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.764 [2024-07-15 23:52:56.846555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.764 [2024-07-15 23:52:56.846570] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.764 [2024-07-15 23:52:56.860146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.764 [2024-07-15 23:52:56.860161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.764 [2024-07-15 23:52:56.873774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.764 [2024-07-15 23:52:56.873790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.764 [2024-07-15 23:52:56.886553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.764 [2024-07-15 23:52:56.886568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.764 [2024-07-15 23:52:56.900064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.764 [2024-07-15 23:52:56.900080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.764 [2024-07-15 23:52:56.913101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.764 [2024-07-15 23:52:56.913116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.764 [2024-07-15 23:52:56.926069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.764 [2024-07-15 23:52:56.926084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.764 [2024-07-15 23:52:56.939298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.764 [2024-07-15 23:52:56.939313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.764 [2024-07-15 23:52:56.952564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.764 [2024-07-15 23:52:56.952580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.024 [2024-07-15 23:52:56.966581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.024 [2024-07-15 23:52:56.966597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.024 [2024-07-15 23:52:56.979751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.024 [2024-07-15 23:52:56.979766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.024 [2024-07-15 23:52:56.993001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.024 [2024-07-15 23:52:56.993016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.024 [2024-07-15 23:52:57.006618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.024 [2024-07-15 23:52:57.006634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.024 [2024-07-15 23:52:57.020073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.024 [2024-07-15 23:52:57.020088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.024 [2024-07-15 23:52:57.033264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.024 [2024-07-15 23:52:57.033279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.024 [2024-07-15 23:52:57.045994] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.024 [2024-07-15 23:52:57.046010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.024 [2024-07-15 23:52:57.059516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.024 [2024-07-15 23:52:57.059531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.024 [2024-07-15 23:52:57.072955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.024 [2024-07-15 23:52:57.072970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.024 [2024-07-15 23:52:57.086399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.024 [2024-07-15 23:52:57.086414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.024 [2024-07-15 23:52:57.099597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.024 [2024-07-15 23:52:57.099612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.024 [2024-07-15 23:52:57.111974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.024 [2024-07-15 23:52:57.111989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.024 [2024-07-15 23:52:57.125374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.024 [2024-07-15 23:52:57.125390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.024 [2024-07-15 23:52:57.138470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.024 [2024-07-15 23:52:57.138485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.024 [2024-07-15 23:52:57.151103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.024 [2024-07-15 23:52:57.151118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.024 [2024-07-15 23:52:57.163984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.024 [2024-07-15 23:52:57.163999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.025 [2024-07-15 23:52:57.176765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.025 [2024-07-15 23:52:57.176781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.025 [2024-07-15 23:52:57.189073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.025 [2024-07-15 23:52:57.189089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.025 [2024-07-15 23:52:57.202378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.025 [2024-07-15 23:52:57.202393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.285 [2024-07-15 23:52:57.215070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.285 [2024-07-15 23:52:57.215086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.285 [2024-07-15 23:52:57.228165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.285 [2024-07-15 23:52:57.228181] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.285 [2024-07-15 23:52:57.240651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.285 [2024-07-15 23:52:57.240666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.285 [2024-07-15 23:52:57.253888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.285 [2024-07-15 23:52:57.253904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.285 [2024-07-15 23:52:57.267227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.285 [2024-07-15 23:52:57.267251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.285 [2024-07-15 23:52:57.279955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.285 [2024-07-15 23:52:57.279972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.285 [2024-07-15 23:52:57.293308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.285 [2024-07-15 23:52:57.293324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.285 [2024-07-15 23:52:57.307004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.285 [2024-07-15 23:52:57.307020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.285 [2024-07-15 23:52:57.320282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.285 [2024-07-15 23:52:57.320298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.285 [2024-07-15 23:52:57.333435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.285 [2024-07-15 23:52:57.333450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.285 [2024-07-15 23:52:57.346937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.285 [2024-07-15 23:52:57.346952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.285 [2024-07-15 23:52:57.360043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.285 [2024-07-15 23:52:57.360059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.285 [2024-07-15 23:52:57.373050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.285 [2024-07-15 23:52:57.373065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.285 [2024-07-15 23:52:57.385690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.285 [2024-07-15 23:52:57.385706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.285 [2024-07-15 23:52:57.398968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.285 [2024-07-15 23:52:57.398984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.285 [2024-07-15 23:52:57.412682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.285 [2024-07-15 23:52:57.412698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.285 [2024-07-15 23:52:57.426152] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.285 [2024-07-15 23:52:57.426167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.285 [2024-07-15 23:52:57.439428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.285 [2024-07-15 23:52:57.439443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.285 [2024-07-15 23:52:57.452238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.285 [2024-07-15 23:52:57.452254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.285 [2024-07-15 23:52:57.465164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.285 [2024-07-15 23:52:57.465180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.546 [2024-07-15 23:52:57.478266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.546 [2024-07-15 23:52:57.478281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.546 [2024-07-15 23:52:57.490955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.546 [2024-07-15 23:52:57.490970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.546 [2024-07-15 23:52:57.503756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.546 [2024-07-15 23:52:57.503771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.546 [2024-07-15 23:52:57.516630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.546 [2024-07-15 23:52:57.516650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.546 [2024-07-15 23:52:57.529299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.546 [2024-07-15 23:52:57.529315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.546 [2024-07-15 23:52:57.542529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.546 [2024-07-15 23:52:57.542545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.546 [2024-07-15 23:52:57.555518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.546 [2024-07-15 23:52:57.555533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.546 [2024-07-15 23:52:57.569109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.546 [2024-07-15 23:52:57.569125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.546 [2024-07-15 23:52:57.582461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.546 [2024-07-15 23:52:57.582477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.546 [2024-07-15 23:52:57.595908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.546 [2024-07-15 23:52:57.595925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.546 [2024-07-15 23:52:57.608753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.546 [2024-07-15 23:52:57.608769] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.546 [2024-07-15 23:52:57.621669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.546 [2024-07-15 23:52:57.621686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.546 [2024-07-15 23:52:57.634907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.546 [2024-07-15 23:52:57.634923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.546 [2024-07-15 23:52:57.647460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.546 [2024-07-15 23:52:57.647476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.546 [2024-07-15 23:52:57.660371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.546 [2024-07-15 23:52:57.660387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.546 [2024-07-15 23:52:57.672992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.546 [2024-07-15 23:52:57.673008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.546 [2024-07-15 23:52:57.685896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.546 [2024-07-15 23:52:57.685911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.546 [2024-07-15 23:52:57.699518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.546 [2024-07-15 23:52:57.699534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.546 [2024-07-15 23:52:57.712500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.546 [2024-07-15 23:52:57.712516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.546 [2024-07-15 23:52:57.726020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.546 [2024-07-15 23:52:57.726035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.806 [2024-07-15 23:52:57.739018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.806 [2024-07-15 23:52:57.739034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.806 [2024-07-15 23:52:57.751719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.806 [2024-07-15 23:52:57.751735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.806 [2024-07-15 23:52:57.765176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.806 [2024-07-15 23:52:57.765195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.806 [2024-07-15 23:52:57.778500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.806 [2024-07-15 23:52:57.778516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.806 [2024-07-15 23:52:57.792096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.806 [2024-07-15 23:52:57.792111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.806 [2024-07-15 23:52:57.805499] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.806 [2024-07-15 23:52:57.805515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.806 [2024-07-15 23:52:57.819177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.806 [2024-07-15 23:52:57.819193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.806 [2024-07-15 23:52:57.831867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.806 [2024-07-15 23:52:57.831883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.806 [2024-07-15 23:52:57.844888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.806 [2024-07-15 23:52:57.844903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.806 [2024-07-15 23:52:57.857347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.806 [2024-07-15 23:52:57.857362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.806 [2024-07-15 23:52:57.871012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.806 [2024-07-15 23:52:57.871028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.806 [2024-07-15 23:52:57.884875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.806 [2024-07-15 23:52:57.884891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.806 [2024-07-15 23:52:57.898222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.806 [2024-07-15 23:52:57.898243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.806 [2024-07-15 23:52:57.910900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.806 [2024-07-15 23:52:57.910914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.806 [2024-07-15 23:52:57.923649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.806 [2024-07-15 23:52:57.923665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.806 [2024-07-15 23:52:57.935933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.806 [2024-07-15 23:52:57.935948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.806 [2024-07-15 23:52:57.949197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.806 [2024-07-15 23:52:57.949211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.806 [2024-07-15 23:52:57.962709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.806 [2024-07-15 23:52:57.962724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.806 [2024-07-15 23:52:57.975146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.807 [2024-07-15 23:52:57.975161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.807 [2024-07-15 23:52:57.987800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.807 [2024-07-15 23:52:57.987817] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.068 [2024-07-15 23:52:58.000917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.068 [2024-07-15 23:52:58.000933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.068 [2024-07-15 23:52:58.014058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.068 [2024-07-15 23:52:58.014077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.068 [2024-07-15 23:52:58.027450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.068 [2024-07-15 23:52:58.027465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.068 [2024-07-15 23:52:58.041331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.068 [2024-07-15 23:52:58.041346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.068 [2024-07-15 23:52:58.054884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.068 [2024-07-15 23:52:58.054899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.068 [2024-07-15 23:52:58.068284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.068 [2024-07-15 23:52:58.068300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.068 [2024-07-15 23:52:58.081868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.068 [2024-07-15 23:52:58.081884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.068 [2024-07-15 23:52:58.094549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.068 [2024-07-15 23:52:58.094564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.068 [2024-07-15 23:52:58.107885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.068 [2024-07-15 23:52:58.107901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.068 [2024-07-15 23:52:58.120505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.068 [2024-07-15 23:52:58.120520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.068 [2024-07-15 23:52:58.133589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.068 [2024-07-15 23:52:58.133604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.068 [2024-07-15 23:52:58.147450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.068 [2024-07-15 23:52:58.147465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.068 [2024-07-15 23:52:58.160164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.068 [2024-07-15 23:52:58.160179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.068 [2024-07-15 23:52:58.173250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.068 [2024-07-15 23:52:58.173265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.068 [2024-07-15 23:52:58.186194] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.068 [2024-07-15 23:52:58.186210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.068 [2024-07-15 23:52:58.200308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.068 [2024-07-15 23:52:58.200323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.068 [2024-07-15 23:52:58.213570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.068 [2024-07-15 23:52:58.213585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.068 [2024-07-15 23:52:58.226592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.068 [2024-07-15 23:52:58.226606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.068 [2024-07-15 23:52:58.239444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.068 [2024-07-15 23:52:58.239459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.068 [2024-07-15 23:52:58.252774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.068 [2024-07-15 23:52:58.252789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.329 [2024-07-15 23:52:58.266049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.329 [2024-07-15 23:52:58.266064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.329 [2024-07-15 23:52:58.279345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.329 [2024-07-15 23:52:58.279360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.329 [2024-07-15 23:52:58.292800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.329 [2024-07-15 23:52:58.292817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.329 [2024-07-15 23:52:58.306566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.329 [2024-07-15 23:52:58.306581] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.329 [2024-07-15 23:52:58.319449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.329 [2024-07-15 23:52:58.319464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.329 [2024-07-15 23:52:58.333033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.329 [2024-07-15 23:52:58.333049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.329 [2024-07-15 23:52:58.346441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.329 [2024-07-15 23:52:58.346457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.329 [2024-07-15 23:52:58.359183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.329 [2024-07-15 23:52:58.359198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.329 [2024-07-15 23:52:58.372012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.329 [2024-07-15 23:52:58.372027] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.329 [2024-07-15 23:52:58.384883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.329 [2024-07-15 23:52:58.384898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.329 [2024-07-15 23:52:58.398914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.329 [2024-07-15 23:52:58.398929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.329 [2024-07-15 23:52:58.411830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.329 [2024-07-15 23:52:58.411845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.329 [2024-07-15 23:52:58.424626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.329 [2024-07-15 23:52:58.424642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.329 [2024-07-15 23:52:58.437982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.329 [2024-07-15 23:52:58.437997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.329 [2024-07-15 23:52:58.451574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.329 [2024-07-15 23:52:58.451588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.329 [2024-07-15 23:52:58.465249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.329 [2024-07-15 23:52:58.465264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.329 [2024-07-15 23:52:58.478429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.329 [2024-07-15 23:52:58.478444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.329 [2024-07-15 23:52:58.491096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.329 [2024-07-15 23:52:58.491112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.329 [2024-07-15 23:52:58.504003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.329 [2024-07-15 23:52:58.504018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.329 [2024-07-15 23:52:58.516877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.329 [2024-07-15 23:52:58.516892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.589 [2024-07-15 23:52:58.529925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.589 [2024-07-15 23:52:58.529941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.589 [2024-07-15 23:52:58.543181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.589 [2024-07-15 23:52:58.543196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.589 [2024-07-15 23:52:58.555695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.589 [2024-07-15 23:52:58.555710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.589 [2024-07-15 23:52:58.569287] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.589 [2024-07-15 23:52:58.569302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.589 [2024-07-15 23:52:58.581963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.589 [2024-07-15 23:52:58.581978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.589 [2024-07-15 23:52:58.594779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.590 [2024-07-15 23:52:58.594794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.590 [2024-07-15 23:52:58.607199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.590 [2024-07-15 23:52:58.607215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.590 [2024-07-15 23:52:58.619746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.590 [2024-07-15 23:52:58.619761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.590 [2024-07-15 23:52:58.632890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.590 [2024-07-15 23:52:58.632905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.590 [2024-07-15 23:52:58.646502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.590 [2024-07-15 23:52:58.646518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.590 [2024-07-15 23:52:58.659276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.590 [2024-07-15 23:52:58.659290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.590 [2024-07-15 23:52:58.672161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.590 [2024-07-15 23:52:58.672176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.590 [2024-07-15 23:52:58.685448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.590 [2024-07-15 23:52:58.685463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.590 [2024-07-15 23:52:58.698923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.590 [2024-07-15 23:52:58.698938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.590 [2024-07-15 23:52:58.711870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.590 [2024-07-15 23:52:58.711884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.590 [2024-07-15 23:52:58.725494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.590 [2024-07-15 23:52:58.725509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.590 [2024-07-15 23:52:58.738993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.590 [2024-07-15 23:52:58.739008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.590 [2024-07-15 23:52:58.752214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.590 [2024-07-15 23:52:58.752228] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.590 [2024-07-15 23:52:58.765485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.590 [2024-07-15 23:52:58.765500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.590 [2024-07-15 23:52:58.778376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.590 [2024-07-15 23:52:58.778392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.849 [2024-07-15 23:52:58.790991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.849 [2024-07-15 23:52:58.791007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.849 [2024-07-15 23:52:58.804754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.849 [2024-07-15 23:52:58.804769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.849 [2024-07-15 23:52:58.817444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.849 [2024-07-15 23:52:58.817459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.849 [2024-07-15 23:52:58.830145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.849 [2024-07-15 23:52:58.830160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.849 [2024-07-15 23:52:58.843372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.849 [2024-07-15 23:52:58.843387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.849 [2024-07-15 23:52:58.857062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.849 [2024-07-15 23:52:58.857077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.849 [2024-07-15 23:52:58.870606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.849 [2024-07-15 23:52:58.870621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.849 [2024-07-15 23:52:58.883608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.849 [2024-07-15 23:52:58.883623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.849 [2024-07-15 23:52:58.896490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.849 [2024-07-15 23:52:58.896505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.849 [2024-07-15 23:52:58.909825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.849 [2024-07-15 23:52:58.909841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.849 [2024-07-15 23:52:58.923246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.849 [2024-07-15 23:52:58.923262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.849 [2024-07-15 23:52:58.936153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.849 [2024-07-15 23:52:58.936169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.849 [2024-07-15 23:52:58.949576] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.849 [2024-07-15 23:52:58.949592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.849 [2024-07-15 23:52:58.963152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.849 [2024-07-15 23:52:58.963168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.849 [2024-07-15 23:52:58.976279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.849 [2024-07-15 23:52:58.976295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.849 [2024-07-15 23:52:58.989509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.849 [2024-07-15 23:52:58.989524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.849 [2024-07-15 23:52:59.002509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.849 [2024-07-15 23:52:59.002525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.849 [2024-07-15 23:52:59.015013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.849 [2024-07-15 23:52:59.015028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.849 [2024-07-15 23:52:59.028256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.849 [2024-07-15 23:52:59.028274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.109 [2024-07-15 23:52:59.042063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.109 [2024-07-15 23:52:59.042079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.109 [2024-07-15 23:52:59.055307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.109 [2024-07-15 23:52:59.055323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.109 [2024-07-15 23:52:59.068492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.109 [2024-07-15 23:52:59.068508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.109 [2024-07-15 23:52:59.082136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.109 [2024-07-15 23:52:59.082152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.109 [2024-07-15 23:52:59.095764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.109 [2024-07-15 23:52:59.095780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.109 [2024-07-15 23:52:59.109271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.109 [2024-07-15 23:52:59.109287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.109 [2024-07-15 23:52:59.122666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.109 [2024-07-15 23:52:59.122682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.109 [2024-07-15 23:52:59.135868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.109 [2024-07-15 23:52:59.135884] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.109 [2024-07-15 23:52:59.149410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.109 [2024-07-15 23:52:59.149426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.109 [2024-07-15 23:52:59.163047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.109 [2024-07-15 23:52:59.163062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.109 [2024-07-15 23:52:59.176198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.109 [2024-07-15 23:52:59.176214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.109 [2024-07-15 23:52:59.189246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.109 [2024-07-15 23:52:59.189261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.109 [2024-07-15 23:52:59.202974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.109 [2024-07-15 23:52:59.202989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.109 [2024-07-15 23:52:59.215793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.109 [2024-07-15 23:52:59.215809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.109 [2024-07-15 23:52:59.228295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.109 [2024-07-15 23:52:59.228310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.109 [2024-07-15 23:52:59.241334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.109 [2024-07-15 23:52:59.241350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.109 [2024-07-15 23:52:59.254041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.109 [2024-07-15 23:52:59.254060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.109 [2024-07-15 23:52:59.267546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.109 [2024-07-15 23:52:59.267561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.109 [2024-07-15 23:52:59.281112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.109 [2024-07-15 23:52:59.281127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.109 [2024-07-15 23:52:59.293881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.109 [2024-07-15 23:52:59.293896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.368 [2024-07-15 23:52:59.306717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.368 [2024-07-15 23:52:59.306733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.368 [2024-07-15 23:52:59.319684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.368 [2024-07-15 23:52:59.319700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.368 [2024-07-15 23:52:59.333226] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.368 [2024-07-15 23:52:59.333246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.368 [2024-07-15 23:52:59.346623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.368 [2024-07-15 23:52:59.346639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.368 [2024-07-15 23:52:59.359370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.368 [2024-07-15 23:52:59.359385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.369 [2024-07-15 23:52:59.372653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.369 [2024-07-15 23:52:59.372669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.369 [2024-07-15 23:52:59.385927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.369 [2024-07-15 23:52:59.385942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.369 [2024-07-15 23:52:59.398183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.369 [2024-07-15 23:52:59.398198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.369 00:16:44.369 Latency(us) 00:16:44.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.369 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:44.369 Nvme1n1 : 5.01 19092.60 149.16 0.00 0.00 6696.85 2812.59 18459.31 00:16:44.369 =================================================================================================================== 00:16:44.369 Total : 19092.60 149.16 0.00 0.00 6696.85 2812.59 18459.31 00:16:44.369 [2024-07-15 23:52:59.407682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.369 [2024-07-15 23:52:59.407696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.369 [2024-07-15 23:52:59.419715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.369 [2024-07-15 23:52:59.419728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.369 [2024-07-15 23:52:59.431746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.369 [2024-07-15 23:52:59.431758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.369 [2024-07-15 23:52:59.443776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.369 [2024-07-15 23:52:59.443787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.369 [2024-07-15 23:52:59.455805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.369 [2024-07-15 23:52:59.455821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.369 [2024-07-15 23:52:59.467831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.369 [2024-07-15 23:52:59.467840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.369 [2024-07-15 23:52:59.479859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.369 [2024-07-15 23:52:59.479868] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.369 [2024-07-15 23:52:59.491893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.369 [2024-07-15 23:52:59.491905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.369 [2024-07-15 23:52:59.503932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.369 [2024-07-15 23:52:59.503941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.369 [2024-07-15 23:52:59.515954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.369 [2024-07-15 23:52:59.515965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.369 [2024-07-15 23:52:59.527981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:44.369 [2024-07-15 23:52:59.527990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:44.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (428824) - No such process 00:16:44.369 23:52:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 428824 00:16:44.369 23:52:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:44.369 23:52:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:44.369 23:52:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:44.369 23:52:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:44.369 23:52:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:44.369 23:52:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:44.369 23:52:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:44.369 delay0 00:16:44.369 23:52:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:44.369 23:52:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:44.369 23:52:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:44.369 23:52:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:44.628 23:52:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:44.628 23:52:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:44.628 [2024-07-15 23:52:59.669759] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:51.290 Initializing NVMe Controllers 00:16:51.290 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:51.290 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:51.290 Initialization complete. Launching workers. 
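Condensed from the xtrace lines above, the zcopy abort step amounts to the following standalone sequence (a minimal sketch only, assuming the stock scripts/rpc.py client and repository-relative example path; every argument value is the one shown in the trace):

  # Replace the subsystem's namespace with a delay bdev layered on malloc0,
  # then drive abort-heavy random I/O against it over NVMe/TCP for 5 seconds.
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000   # delay knobs in microseconds, as used by the test
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The artificially slow delay0 namespace keeps commands outstanding long enough for the abort tool to cancel them, which is what the submitted/aborted counters below measure.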
00:16:51.290 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 177 00:16:51.290 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 461, failed to submit 36 00:16:51.290 success 292, unsuccess 169, failed 0 00:16:51.290 23:53:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:51.290 23:53:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:51.290 23:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:51.291 23:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:51.291 23:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:51.291 23:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:51.291 23:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:51.291 23:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:51.291 rmmod nvme_tcp 00:16:51.291 rmmod nvme_fabrics 00:16:51.291 rmmod nvme_keyring 00:16:51.291 23:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:51.291 23:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:51.291 23:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:51.291 23:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 426715 ']' 00:16:51.291 23:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 426715 00:16:51.291 23:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@942 -- # '[' -z 426715 ']' 00:16:51.291 23:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # kill -0 426715 00:16:51.291 23:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@947 -- # uname 00:16:51.291 23:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:16:51.291 23:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 426715 00:16:51.291 23:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:16:51.291 23:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:16:51.291 23:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@960 -- # echo 'killing process with pid 426715' 00:16:51.291 killing process with pid 426715 00:16:51.291 23:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@961 -- # kill 426715 00:16:51.291 23:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # wait 426715 00:16:51.291 23:53:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:51.291 23:53:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:51.291 23:53:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:51.291 23:53:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:51.291 23:53:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:51.291 23:53:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.291 23:53:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.291 23:53:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.207 23:53:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:53.207 00:16:53.207 real 0m33.922s 00:16:53.207 user 0m45.008s 00:16:53.207 sys 0m10.530s 00:16:53.207 23:53:08 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1118 -- # xtrace_disable 00:16:53.207 23:53:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:53.207 ************************************ 00:16:53.207 END TEST nvmf_zcopy 00:16:53.207 ************************************ 00:16:53.207 23:53:08 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:16:53.207 23:53:08 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:53.207 23:53:08 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:16:53.207 23:53:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:16:53.207 23:53:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:53.207 ************************************ 00:16:53.207 START TEST nvmf_nmic 00:16:53.207 ************************************ 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:53.207 * Looking for test storage... 00:16:53.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:53.207 23:53:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:01.352 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:01.352 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:01.353 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:01.353 Found net devices under 0000:31:00.0: cvl_0_0 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:01.353 Found net devices under 0000:31:00.1: cvl_0_1 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:01.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:01.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:17:01.353 00:17:01.353 --- 10.0.0.2 ping statistics --- 00:17:01.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.353 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:01.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:01.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:17:01.353 00:17:01.353 --- 10.0.0.1 ping statistics --- 00:17:01.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.353 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=435793 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 435793 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@823 -- # '[' -z 435793 ']' 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:01.353 23:53:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:01.353 [2024-07-15 23:53:16.480470] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:17:01.353 [2024-07-15 23:53:16.480538] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.614 [2024-07-15 23:53:16.560968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:01.614 [2024-07-15 23:53:16.637673] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.614 [2024-07-15 23:53:16.637711] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
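The interface preparation traced above reduces to this short sequence (a sketch, not a verbatim replay; the cvl_0_0/cvl_0_1 names are the ports this rig detected and the nvmf_tgt path is repository-relative):

  # Move the target-side port into its own network namespace, address both
  # ports on the same /24, open TCP 4420, and verify reachability both ways.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # The target then runs inside the namespace, as the trace shows next:
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF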
00:17:01.614 [2024-07-15 23:53:16.637719] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.614 [2024-07-15 23:53:16.637726] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.614 [2024-07-15 23:53:16.637732] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:01.614 [2024-07-15 23:53:16.637902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.614 [2024-07-15 23:53:16.638030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.615 [2024-07-15 23:53:16.638063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.615 [2024-07-15 23:53:16.638062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # return 0 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:02.186 [2024-07-15 23:53:17.299765] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:02.186 Malloc0 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:02.186 [2024-07-15 23:53:17.359211] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:17:02.186 test case1: single bdev can't be used in multiple subsystems 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:02.186 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:02.446 23:53:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:02.446 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:02.446 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:02.446 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:02.446 23:53:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:17:02.446 23:53:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:17:02.447 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:02.447 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:02.447 [2024-07-15 23:53:17.395146] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:17:02.447 [2024-07-15 23:53:17.395165] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:17:02.447 [2024-07-15 23:53:17.395172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:02.447 request: 00:17:02.447 { 00:17:02.447 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:02.447 "namespace": { 00:17:02.447 "bdev_name": "Malloc0", 00:17:02.447 "no_auto_visible": false 00:17:02.447 }, 00:17:02.447 "method": "nvmf_subsystem_add_ns", 00:17:02.447 "req_id": 1 00:17:02.447 } 00:17:02.447 Got JSON-RPC error response 00:17:02.447 response: 00:17:02.447 { 00:17:02.447 "code": -32602, 00:17:02.447 "message": "Invalid parameters" 00:17:02.447 } 00:17:02.447 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:17:02.447 23:53:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:17:02.447 23:53:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:17:02.447 23:53:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:17:02.447 Adding namespace failed - expected result. 
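Test case 1 above exercises the exclusive-write claim on a bdev: the same Malloc0 is offered as a namespace to a second subsystem and the RPC is expected to fail. Condensed to the equivalent scripts/rpc.py invocations visible in the trace (a sketch; the harness actually drives these through rpc_cmd against the target running in cvl_0_0_ns_spdk, over the default /var/tmp/spdk.sock socket):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails as expected: Malloc0 is
                                                                      # already claimed by cnode1 (-32602)
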
00:17:02.447 23:53:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:02.447 test case2: host connect to nvmf target in multiple paths 00:17:02.447 23:53:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:02.447 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:02.447 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:02.447 [2024-07-15 23:53:17.407283] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:02.447 23:53:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:02.447 23:53:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:03.828 23:53:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:17:05.217 23:53:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:05.475 23:53:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1192 -- # local i=0 00:17:05.476 23:53:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:17:05.476 23:53:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:17:05.476 23:53:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # sleep 2 00:17:07.387 23:53:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:17:07.387 23:53:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:17:07.387 23:53:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:17:07.387 23:53:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:17:07.387 23:53:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:17:07.387 23:53:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # return 0 00:17:07.387 23:53:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:07.387 [global] 00:17:07.387 thread=1 00:17:07.387 invalidate=1 00:17:07.387 rw=write 00:17:07.387 time_based=1 00:17:07.387 runtime=1 00:17:07.387 ioengine=libaio 00:17:07.387 direct=1 00:17:07.387 bs=4096 00:17:07.387 iodepth=1 00:17:07.387 norandommap=0 00:17:07.387 numjobs=1 00:17:07.387 00:17:07.387 verify_dump=1 00:17:07.387 verify_backlog=512 00:17:07.387 verify_state_save=0 00:17:07.387 do_verify=1 00:17:07.387 verify=crc32c-intel 00:17:07.387 [job0] 00:17:07.387 filename=/dev/nvme0n1 00:17:07.387 Could not set queue depth (nvme0n1) 00:17:07.648 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:07.648 fio-3.35 00:17:07.648 Starting 1 thread 00:17:09.030 00:17:09.030 job0: (groupid=0, jobs=1): err= 0: pid=437322: Mon Jul 15 23:53:23 2024 00:17:09.030 read: IOPS=499, BW=1998KiB/s (2046kB/s)(2000KiB/1001msec) 00:17:09.030 slat (nsec): min=7424, max=62769, avg=27046.15, stdev=3955.40 
00:17:09.030 clat (usec): min=910, max=1671, avg=1190.05, stdev=71.87 00:17:09.030 lat (usec): min=936, max=1698, avg=1217.10, stdev=72.03 00:17:09.030 clat percentiles (usec): 00:17:09.030 | 1.00th=[ 996], 5.00th=[ 1074], 10.00th=[ 1106], 20.00th=[ 1139], 00:17:09.030 | 30.00th=[ 1156], 40.00th=[ 1172], 50.00th=[ 1188], 60.00th=[ 1205], 00:17:09.030 | 70.00th=[ 1221], 80.00th=[ 1254], 90.00th=[ 1270], 95.00th=[ 1287], 00:17:09.030 | 99.00th=[ 1369], 99.50th=[ 1385], 99.90th=[ 1680], 99.95th=[ 1680], 00:17:09.030 | 99.99th=[ 1680] 00:17:09.030 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:09.030 slat (usec): min=9, max=26746, avg=81.94, stdev=1180.78 00:17:09.030 clat (usec): min=341, max=884, avg=666.29, stdev=91.96 00:17:09.030 lat (usec): min=351, max=27544, avg=748.23, stdev=1190.58 00:17:09.031 clat percentiles (usec): 00:17:09.031 | 1.00th=[ 429], 5.00th=[ 494], 10.00th=[ 529], 20.00th=[ 594], 00:17:09.031 | 30.00th=[ 627], 40.00th=[ 660], 50.00th=[ 676], 60.00th=[ 701], 00:17:09.031 | 70.00th=[ 725], 80.00th=[ 742], 90.00th=[ 775], 95.00th=[ 799], 00:17:09.031 | 99.00th=[ 832], 99.50th=[ 848], 99.90th=[ 889], 99.95th=[ 889], 00:17:09.031 | 99.99th=[ 889] 00:17:09.031 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:17:09.031 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:09.031 lat (usec) : 500=2.77%, 750=39.43%, 1000=8.89% 00:17:09.031 lat (msec) : 2=48.91% 00:17:09.031 cpu : usr=2.00%, sys=4.00%, ctx=1015, majf=0, minf=1 00:17:09.031 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:09.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.031 issued rwts: total=500,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.031 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:09.031 00:17:09.031 Run status group 0 (all jobs): 00:17:09.031 READ: bw=1998KiB/s (2046kB/s), 1998KiB/s-1998KiB/s (2046kB/s-2046kB/s), io=2000KiB (2048kB), run=1001-1001msec 00:17:09.031 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:17:09.031 00:17:09.031 Disk stats (read/write): 00:17:09.031 nvme0n1: ios=442/512, merge=0/0, ticks=1422/302, in_queue=1724, util=98.80% 00:17:09.031 23:53:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:09.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:09.031 23:53:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:09.031 23:53:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1213 -- # local i=0 00:17:09.031 23:53:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:17:09.031 23:53:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:09.031 23:53:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:17:09.031 23:53:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:09.031 23:53:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1225 -- # return 0 00:17:09.031 23:53:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:09.031 23:53:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:17:09.031 23:53:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:17:09.031 23:53:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:17:09.031 23:53:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:09.031 23:53:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:17:09.031 23:53:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:09.031 23:53:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:09.031 rmmod nvme_tcp 00:17:09.031 rmmod nvme_fabrics 00:17:09.031 rmmod nvme_keyring 00:17:09.291 23:53:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:09.291 23:53:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:17:09.291 23:53:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:17:09.291 23:53:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 435793 ']' 00:17:09.291 23:53:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 435793 00:17:09.291 23:53:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@942 -- # '[' -z 435793 ']' 00:17:09.291 23:53:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # kill -0 435793 00:17:09.291 23:53:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@947 -- # uname 00:17:09.291 23:53:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:17:09.291 23:53:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 435793 00:17:09.291 23:53:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:17:09.291 23:53:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:17:09.291 23:53:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@960 -- # echo 'killing process with pid 435793' 00:17:09.291 killing process with pid 435793 00:17:09.291 23:53:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@961 -- # kill 435793 00:17:09.291 23:53:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # wait 435793 00:17:09.291 23:53:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:09.291 23:53:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:09.291 23:53:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:09.291 23:53:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:09.291 23:53:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:09.291 23:53:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.291 23:53:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.291 23:53:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.835 23:53:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:11.835 00:17:11.835 real 0m18.349s 00:17:11.835 user 0m45.622s 00:17:11.835 sys 0m6.816s 00:17:11.835 23:53:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1118 -- # xtrace_disable 00:17:11.835 23:53:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:11.835 ************************************ 00:17:11.835 END TEST nvmf_nmic 00:17:11.835 ************************************ 00:17:11.835 23:53:26 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:17:11.835 23:53:26 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:11.835 23:53:26 nvmf_tcp -- 
common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:17:11.835 23:53:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:17:11.835 23:53:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:11.835 ************************************ 00:17:11.835 START TEST nvmf_fio_target 00:17:11.835 ************************************ 00:17:11.835 23:53:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:11.835 * Looking for test storage... 00:17:11.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:11.835 23:53:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:11.835 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:17:11.835 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.835 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.835 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.835 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:11.836 23:53:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.973 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:19.973 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:19.973 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:19.973 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:19.973 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:19.973 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:19.973 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:19.973 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:19.973 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:19.973 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:17:19.973 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:19.973 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:19.974 23:53:34 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:19.974 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:19.974 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.974 23:53:34 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:19.974 Found net devices under 0000:31:00.0: cvl_0_0 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:19.974 Found net devices under 0000:31:00.1: cvl_0_1 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:19.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:19.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:17:19.974 00:17:19.974 --- 10.0.0.2 ping statistics --- 00:17:19.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.974 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:19.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:19.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:17:19.974 00:17:19.974 --- 10.0.0.1 ping statistics --- 00:17:19.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.974 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=442323 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 442323 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@823 -- # '[' -z 442323 ']' 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
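The target application itself runs inside that namespace; the sketch below is the single command the nvmfappstart call above wraps (full Jenkins workspace path as it appears in this run), after which waitforlisten polls until the RPC socket /var/tmp/spdk.sock is accepting connections:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &   # -i shm id, -e tracepoint group mask, -m core mask 0xF (4 reactors)
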
00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:19.974 23:53:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.974 [2024-07-15 23:53:34.884401] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:17:19.974 [2024-07-15 23:53:34.884451] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.974 [2024-07-15 23:53:34.957173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:19.974 [2024-07-15 23:53:35.022728] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:19.974 [2024-07-15 23:53:35.022762] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:19.974 [2024-07-15 23:53:35.022769] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:19.974 [2024-07-15 23:53:35.022776] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:19.974 [2024-07-15 23:53:35.022782] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:19.974 [2024-07-15 23:53:35.022924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.974 [2024-07-15 23:53:35.023034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.974 [2024-07-15 23:53:35.023187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.974 [2024-07-15 23:53:35.023188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:20.545 23:53:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:20.545 23:53:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # return 0 00:17:20.545 23:53:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:20.545 23:53:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:20.545 23:53:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.545 23:53:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.545 23:53:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:20.805 [2024-07-15 23:53:35.834322] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.805 23:53:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:21.065 23:53:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:21.065 23:53:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:21.065 23:53:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:21.065 23:53:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:21.326 23:53:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:17:21.326 23:53:36 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:21.587 23:53:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:21.587 23:53:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:21.587 23:53:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:21.848 23:53:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:21.848 23:53:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:22.108 23:53:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:22.108 23:53:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:22.108 23:53:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:22.108 23:53:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:22.369 23:53:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:22.630 23:53:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:22.630 23:53:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:22.630 23:53:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:22.630 23:53:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:22.890 23:53:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:22.890 [2024-07-15 23:53:38.068269] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.151 23:53:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:23.151 23:53:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:23.411 23:53:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:24.796 23:53:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:24.796 23:53:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1192 -- # local i=0 00:17:24.796 23:53:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 
nvme_devices=0 00:17:24.796 23:53:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # [[ -n 4 ]] 00:17:24.796 23:53:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # nvme_device_counter=4 00:17:24.796 23:53:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # sleep 2 00:17:26.711 23:53:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:17:26.711 23:53:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:17:26.711 23:53:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:17:26.972 23:53:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_devices=4 00:17:26.972 23:53:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:17:26.972 23:53:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # return 0 00:17:26.972 23:53:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:26.972 [global] 00:17:26.972 thread=1 00:17:26.972 invalidate=1 00:17:26.972 rw=write 00:17:26.972 time_based=1 00:17:26.972 runtime=1 00:17:26.972 ioengine=libaio 00:17:26.972 direct=1 00:17:26.972 bs=4096 00:17:26.972 iodepth=1 00:17:26.972 norandommap=0 00:17:26.972 numjobs=1 00:17:26.972 00:17:26.972 verify_dump=1 00:17:26.972 verify_backlog=512 00:17:26.972 verify_state_save=0 00:17:26.972 do_verify=1 00:17:26.972 verify=crc32c-intel 00:17:26.972 [job0] 00:17:26.972 filename=/dev/nvme0n1 00:17:26.972 [job1] 00:17:26.972 filename=/dev/nvme0n2 00:17:26.972 [job2] 00:17:26.972 filename=/dev/nvme0n3 00:17:26.972 [job3] 00:17:26.972 filename=/dev/nvme0n4 00:17:26.972 Could not set queue depth (nvme0n1) 00:17:26.972 Could not set queue depth (nvme0n2) 00:17:26.972 Could not set queue depth (nvme0n3) 00:17:26.972 Could not set queue depth (nvme0n4) 00:17:27.232 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:27.232 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:27.232 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:27.232 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:27.232 fio-3.35 00:17:27.232 Starting 4 threads 00:17:28.617 00:17:28.618 job0: (groupid=0, jobs=1): err= 0: pid=443933: Mon Jul 15 23:53:43 2024 00:17:28.618 read: IOPS=43, BW=175KiB/s (179kB/s)(176KiB/1008msec) 00:17:28.618 slat (nsec): min=7922, max=27024, avg=24276.82, stdev=2576.83 00:17:28.618 clat (usec): min=699, max=41466, avg=16381.58, stdev=19750.37 00:17:28.618 lat (usec): min=723, max=41491, avg=16405.86, stdev=19750.59 00:17:28.618 clat percentiles (usec): 00:17:28.618 | 1.00th=[ 701], 5.00th=[ 807], 10.00th=[ 816], 20.00th=[ 832], 00:17:28.618 | 30.00th=[ 889], 40.00th=[ 930], 50.00th=[ 963], 60.00th=[ 1074], 00:17:28.618 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:28.618 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:17:28.618 | 99.99th=[41681] 00:17:28.618 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:17:28.618 slat (nsec): min=3147, max=55300, avg=28999.12, stdev=8233.03 00:17:28.618 clat (usec): min=204, max=840, avg=522.80, stdev=109.86 
00:17:28.618 lat (usec): min=215, max=871, avg=551.80, stdev=113.46 00:17:28.618 clat percentiles (usec): 00:17:28.618 | 1.00th=[ 277], 5.00th=[ 322], 10.00th=[ 379], 20.00th=[ 424], 00:17:28.618 | 30.00th=[ 461], 40.00th=[ 502], 50.00th=[ 529], 60.00th=[ 553], 00:17:28.618 | 70.00th=[ 594], 80.00th=[ 627], 90.00th=[ 660], 95.00th=[ 685], 00:17:28.618 | 99.00th=[ 725], 99.50th=[ 783], 99.90th=[ 840], 99.95th=[ 840], 00:17:28.618 | 99.99th=[ 840] 00:17:28.618 bw ( KiB/s): min= 4096, max= 4096, per=45.25%, avg=4096.00, stdev= 0.00, samples=1 00:17:28.618 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:28.618 lat (usec) : 250=0.54%, 500=35.97%, 750=55.04%, 1000=4.68% 00:17:28.618 lat (msec) : 2=0.72%, 50=3.06% 00:17:28.618 cpu : usr=0.79%, sys=1.49%, ctx=556, majf=0, minf=1 00:17:28.618 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:28.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:28.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:28.618 issued rwts: total=44,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:28.618 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:28.618 job1: (groupid=0, jobs=1): err= 0: pid=443944: Mon Jul 15 23:53:43 2024 00:17:28.618 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:28.618 slat (nsec): min=6781, max=59558, avg=24722.48, stdev=4585.75 00:17:28.618 clat (usec): min=711, max=41962, avg=1187.31, stdev=3146.99 00:17:28.618 lat (usec): min=720, max=41989, avg=1212.03, stdev=3147.07 00:17:28.618 clat percentiles (usec): 00:17:28.618 | 1.00th=[ 750], 5.00th=[ 807], 10.00th=[ 840], 20.00th=[ 865], 00:17:28.618 | 30.00th=[ 889], 40.00th=[ 914], 50.00th=[ 930], 60.00th=[ 947], 00:17:28.618 | 70.00th=[ 963], 80.00th=[ 988], 90.00th=[ 1020], 95.00th=[ 1057], 00:17:28.618 | 99.00th=[ 1139], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:17:28.618 | 99.99th=[42206] 00:17:28.618 write: IOPS=700, BW=2801KiB/s (2868kB/s)(2804KiB/1001msec); 0 zone resets 00:17:28.618 slat (nsec): min=3110, max=59344, avg=26131.56, stdev=10963.80 00:17:28.618 clat (usec): min=164, max=1523, avg=502.96, stdev=90.54 00:17:28.618 lat (usec): min=171, max=1533, avg=529.09, stdev=92.58 00:17:28.618 clat percentiles (usec): 00:17:28.618 | 1.00th=[ 314], 5.00th=[ 359], 10.00th=[ 404], 20.00th=[ 437], 00:17:28.618 | 30.00th=[ 461], 40.00th=[ 486], 50.00th=[ 510], 60.00th=[ 529], 00:17:28.618 | 70.00th=[ 545], 80.00th=[ 570], 90.00th=[ 594], 95.00th=[ 619], 00:17:28.618 | 99.00th=[ 693], 99.50th=[ 742], 99.90th=[ 1516], 99.95th=[ 1516], 00:17:28.618 | 99.99th=[ 1516] 00:17:28.618 bw ( KiB/s): min= 4096, max= 4096, per=45.25%, avg=4096.00, stdev= 0.00, samples=1 00:17:28.618 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:28.618 lat (usec) : 250=0.33%, 500=26.30%, 750=31.33%, 1000=35.61% 00:17:28.618 lat (msec) : 2=6.10%, 20=0.08%, 50=0.25% 00:17:28.618 cpu : usr=2.10%, sys=2.90%, ctx=1213, majf=0, minf=1 00:17:28.618 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:28.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:28.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:28.618 issued rwts: total=512,701,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:28.618 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:28.618 job2: (groupid=0, jobs=1): err= 0: pid=443951: Mon Jul 15 23:53:43 2024 00:17:28.618 read: IOPS=511, 
BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:28.618 slat (nsec): min=7662, max=61753, avg=26732.41, stdev=4064.17 00:17:28.618 clat (usec): min=790, max=1394, avg=1149.86, stdev=83.65 00:17:28.618 lat (usec): min=817, max=1420, avg=1176.59, stdev=83.67 00:17:28.618 clat percentiles (usec): 00:17:28.618 | 1.00th=[ 881], 5.00th=[ 1012], 10.00th=[ 1057], 20.00th=[ 1106], 00:17:28.618 | 30.00th=[ 1123], 40.00th=[ 1139], 50.00th=[ 1156], 60.00th=[ 1172], 00:17:28.618 | 70.00th=[ 1188], 80.00th=[ 1205], 90.00th=[ 1237], 95.00th=[ 1270], 00:17:28.618 | 99.00th=[ 1352], 99.50th=[ 1385], 99.90th=[ 1401], 99.95th=[ 1401], 00:17:28.618 | 99.99th=[ 1401] 00:17:28.618 write: IOPS=555, BW=2222KiB/s (2275kB/s)(2224KiB/1001msec); 0 zone resets 00:17:28.618 slat (nsec): min=8951, max=69292, avg=29916.89, stdev=9400.74 00:17:28.618 clat (usec): min=321, max=998, avg=669.57, stdev=130.37 00:17:28.618 lat (usec): min=331, max=1031, avg=699.49, stdev=133.68 00:17:28.618 clat percentiles (usec): 00:17:28.618 | 1.00th=[ 343], 5.00th=[ 437], 10.00th=[ 486], 20.00th=[ 562], 00:17:28.618 | 30.00th=[ 611], 40.00th=[ 652], 50.00th=[ 676], 60.00th=[ 709], 00:17:28.618 | 70.00th=[ 742], 80.00th=[ 775], 90.00th=[ 840], 95.00th=[ 881], 00:17:28.618 | 99.00th=[ 930], 99.50th=[ 930], 99.90th=[ 996], 99.95th=[ 996], 00:17:28.618 | 99.99th=[ 996] 00:17:28.618 bw ( KiB/s): min= 4096, max= 4096, per=45.25%, avg=4096.00, stdev= 0.00, samples=1 00:17:28.618 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:28.618 lat (usec) : 500=5.81%, 750=31.55%, 1000=17.04% 00:17:28.618 lat (msec) : 2=45.60% 00:17:28.618 cpu : usr=2.20%, sys=4.10%, ctx=1068, majf=0, minf=1 00:17:28.618 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:28.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:28.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:28.618 issued rwts: total=512,556,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:28.618 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:28.618 job3: (groupid=0, jobs=1): err= 0: pid=443957: Mon Jul 15 23:53:43 2024 00:17:28.618 read: IOPS=17, BW=71.9KiB/s (73.7kB/s)(72.0KiB/1001msec) 00:17:28.618 slat (nsec): min=26986, max=42768, avg=28223.94, stdev=3643.04 00:17:28.618 clat (usec): min=950, max=42074, avg=37215.34, stdev=13159.45 00:17:28.618 lat (usec): min=978, max=42101, avg=37243.56, stdev=13156.84 00:17:28.618 clat percentiles (usec): 00:17:28.618 | 1.00th=[ 955], 5.00th=[ 955], 10.00th=[ 1172], 20.00th=[41157], 00:17:28.618 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:17:28.618 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:28.618 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:28.618 | 99.99th=[42206] 00:17:28.618 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:28.618 slat (usec): min=9, max=2604, avg=36.86, stdev=115.55 00:17:28.618 clat (usec): min=279, max=1042, avg=602.60, stdev=129.04 00:17:28.618 lat (usec): min=315, max=3113, avg=639.46, stdev=172.85 00:17:28.618 clat percentiles (usec): 00:17:28.618 | 1.00th=[ 343], 5.00th=[ 388], 10.00th=[ 441], 20.00th=[ 486], 00:17:28.618 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 635], 00:17:28.618 | 70.00th=[ 668], 80.00th=[ 717], 90.00th=[ 783], 95.00th=[ 832], 00:17:28.618 | 99.00th=[ 898], 99.50th=[ 930], 99.90th=[ 1045], 99.95th=[ 1045], 00:17:28.618 | 99.99th=[ 1045] 00:17:28.618 bw ( 
KiB/s): min= 4096, max= 4096, per=45.25%, avg=4096.00, stdev= 0.00, samples=1 00:17:28.618 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:28.618 lat (usec) : 500=22.26%, 750=61.32%, 1000=13.02% 00:17:28.618 lat (msec) : 2=0.38%, 50=3.02% 00:17:28.618 cpu : usr=1.10%, sys=1.90%, ctx=534, majf=0, minf=1 00:17:28.618 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:28.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:28.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:28.618 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:28.618 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:28.618 00:17:28.618 Run status group 0 (all jobs): 00:17:28.618 READ: bw=4310KiB/s (4413kB/s), 71.9KiB/s-2046KiB/s (73.7kB/s-2095kB/s), io=4344KiB (4448kB), run=1001-1008msec 00:17:28.618 WRITE: bw=9052KiB/s (9269kB/s), 2032KiB/s-2801KiB/s (2081kB/s-2868kB/s), io=9124KiB (9343kB), run=1001-1008msec 00:17:28.618 00:17:28.618 Disk stats (read/write): 00:17:28.618 nvme0n1: ios=83/512, merge=0/0, ticks=637/248, in_queue=885, util=90.58% 00:17:28.618 nvme0n2: ios=478/512, merge=0/0, ticks=789/253, in_queue=1042, util=91.62% 00:17:28.618 nvme0n3: ios=396/512, merge=0/0, ticks=419/266, in_queue=685, util=88.45% 00:17:28.618 nvme0n4: ios=70/512, merge=0/0, ticks=659/260, in_queue=919, util=96.46% 00:17:28.618 23:53:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:28.618 [global] 00:17:28.618 thread=1 00:17:28.618 invalidate=1 00:17:28.618 rw=randwrite 00:17:28.618 time_based=1 00:17:28.618 runtime=1 00:17:28.618 ioengine=libaio 00:17:28.618 direct=1 00:17:28.618 bs=4096 00:17:28.618 iodepth=1 00:17:28.618 norandommap=0 00:17:28.618 numjobs=1 00:17:28.618 00:17:28.618 verify_dump=1 00:17:28.618 verify_backlog=512 00:17:28.618 verify_state_save=0 00:17:28.618 do_verify=1 00:17:28.618 verify=crc32c-intel 00:17:28.618 [job0] 00:17:28.618 filename=/dev/nvme0n1 00:17:28.618 [job1] 00:17:28.618 filename=/dev/nvme0n2 00:17:28.618 [job2] 00:17:28.618 filename=/dev/nvme0n3 00:17:28.618 [job3] 00:17:28.618 filename=/dev/nvme0n4 00:17:28.618 Could not set queue depth (nvme0n1) 00:17:28.618 Could not set queue depth (nvme0n2) 00:17:28.618 Could not set queue depth (nvme0n3) 00:17:28.618 Could not set queue depth (nvme0n4) 00:17:28.878 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:28.878 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:28.878 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:28.878 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:28.878 fio-3.35 00:17:28.878 Starting 4 threads 00:17:30.260 00:17:30.260 job0: (groupid=0, jobs=1): err= 0: pid=444451: Mon Jul 15 23:53:45 2024 00:17:30.260 read: IOPS=17, BW=71.6KiB/s (73.3kB/s)(72.0KiB/1006msec) 00:17:30.260 slat (nsec): min=23931, max=24917, avg=24158.72, stdev=235.60 00:17:30.260 clat (usec): min=1045, max=42120, avg=39282.03, stdev=9553.75 00:17:30.260 lat (usec): min=1069, max=42144, avg=39306.19, stdev=9553.71 00:17:30.260 clat percentiles (usec): 00:17:30.260 | 1.00th=[ 1045], 5.00th=[ 1045], 10.00th=[41157], 20.00th=[41157], 00:17:30.260 | 
30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:17:30.260 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:30.260 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:30.260 | 99.99th=[42206] 00:17:30.260 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:17:30.260 slat (nsec): min=9041, max=48944, avg=26509.19, stdev=8374.61 00:17:30.260 clat (usec): min=279, max=814, avg=548.76, stdev=109.55 00:17:30.260 lat (usec): min=297, max=837, avg=575.27, stdev=112.00 00:17:30.260 clat percentiles (usec): 00:17:30.260 | 1.00th=[ 293], 5.00th=[ 334], 10.00th=[ 412], 20.00th=[ 457], 00:17:30.260 | 30.00th=[ 506], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 570], 00:17:30.260 | 70.00th=[ 611], 80.00th=[ 652], 90.00th=[ 685], 95.00th=[ 725], 00:17:30.260 | 99.00th=[ 783], 99.50th=[ 791], 99.90th=[ 816], 99.95th=[ 816], 00:17:30.260 | 99.99th=[ 816] 00:17:30.260 bw ( KiB/s): min= 4096, max= 4096, per=47.61%, avg=4096.00, stdev= 0.00, samples=1 00:17:30.260 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:30.260 lat (usec) : 500=27.92%, 750=65.66%, 1000=3.02% 00:17:30.260 lat (msec) : 2=0.19%, 50=3.21% 00:17:30.260 cpu : usr=1.00%, sys=1.09%, ctx=530, majf=0, minf=1 00:17:30.260 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:30.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:30.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:30.260 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:30.260 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:30.260 job1: (groupid=0, jobs=1): err= 0: pid=444459: Mon Jul 15 23:53:45 2024 00:17:30.260 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:30.260 slat (nsec): min=7002, max=58943, avg=24759.73, stdev=3064.78 00:17:30.260 clat (usec): min=822, max=1266, avg=1059.46, stdev=73.51 00:17:30.260 lat (usec): min=847, max=1290, avg=1084.22, stdev=73.48 00:17:30.260 clat percentiles (usec): 00:17:30.260 | 1.00th=[ 848], 5.00th=[ 930], 10.00th=[ 971], 20.00th=[ 1004], 00:17:30.260 | 30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[ 1057], 60.00th=[ 1074], 00:17:30.260 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1172], 00:17:30.260 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1270], 99.95th=[ 1270], 00:17:30.260 | 99.99th=[ 1270] 00:17:30.260 write: IOPS=601, BW=2406KiB/s (2463kB/s)(2408KiB/1001msec); 0 zone resets 00:17:30.260 slat (nsec): min=9088, max=49507, avg=26899.46, stdev=9261.71 00:17:30.260 clat (usec): min=313, max=1102, avg=697.75, stdev=112.65 00:17:30.260 lat (usec): min=324, max=1118, avg=724.65, stdev=117.13 00:17:30.260 clat percentiles (usec): 00:17:30.260 | 1.00th=[ 416], 5.00th=[ 478], 10.00th=[ 553], 20.00th=[ 611], 00:17:30.260 | 30.00th=[ 660], 40.00th=[ 685], 50.00th=[ 709], 60.00th=[ 734], 00:17:30.260 | 70.00th=[ 766], 80.00th=[ 791], 90.00th=[ 832], 95.00th=[ 857], 00:17:30.260 | 99.00th=[ 922], 99.50th=[ 947], 99.90th=[ 1106], 99.95th=[ 1106], 00:17:30.260 | 99.99th=[ 1106] 00:17:30.260 bw ( KiB/s): min= 4096, max= 4096, per=47.61%, avg=4096.00, stdev= 0.00, samples=1 00:17:30.260 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:30.260 lat (usec) : 500=3.23%, 750=32.32%, 1000=27.56% 00:17:30.260 lat (msec) : 2=36.89% 00:17:30.260 cpu : usr=1.20%, sys=3.40%, ctx=1114, majf=0, minf=1 00:17:30.260 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:17:30.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:30.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:30.260 issued rwts: total=512,602,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:30.260 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:30.260 job2: (groupid=0, jobs=1): err= 0: pid=444471: Mon Jul 15 23:53:45 2024 00:17:30.260 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:30.260 slat (nsec): min=7476, max=60764, avg=26887.25, stdev=3213.76 00:17:30.260 clat (usec): min=815, max=1315, avg=1063.18, stdev=77.40 00:17:30.260 lat (usec): min=842, max=1342, avg=1090.07, stdev=77.18 00:17:30.260 clat percentiles (usec): 00:17:30.260 | 1.00th=[ 865], 5.00th=[ 914], 10.00th=[ 971], 20.00th=[ 1004], 00:17:30.260 | 30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:17:30.260 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1188], 00:17:30.260 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1319], 99.95th=[ 1319], 00:17:30.260 | 99.99th=[ 1319] 00:17:30.260 write: IOPS=595, BW=2382KiB/s (2439kB/s)(2384KiB/1001msec); 0 zone resets 00:17:30.260 slat (nsec): min=9152, max=53465, avg=30821.56, stdev=8682.75 00:17:30.260 clat (usec): min=320, max=1087, avg=694.61, stdev=119.95 00:17:30.260 lat (usec): min=352, max=1119, avg=725.44, stdev=122.22 00:17:30.260 clat percentiles (usec): 00:17:30.260 | 1.00th=[ 347], 5.00th=[ 482], 10.00th=[ 537], 20.00th=[ 603], 00:17:30.260 | 30.00th=[ 644], 40.00th=[ 668], 50.00th=[ 701], 60.00th=[ 734], 00:17:30.260 | 70.00th=[ 750], 80.00th=[ 791], 90.00th=[ 840], 95.00th=[ 881], 00:17:30.260 | 99.00th=[ 955], 99.50th=[ 1029], 99.90th=[ 1090], 99.95th=[ 1090], 00:17:30.260 | 99.99th=[ 1090] 00:17:30.260 bw ( KiB/s): min= 4096, max= 4096, per=47.61%, avg=4096.00, stdev= 0.00, samples=1 00:17:30.260 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:30.260 lat (usec) : 500=3.34%, 750=33.66%, 1000=25.18% 00:17:30.260 lat (msec) : 2=37.82% 00:17:30.260 cpu : usr=2.40%, sys=4.20%, ctx=1111, majf=0, minf=1 00:17:30.260 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:30.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:30.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:30.260 issued rwts: total=512,596,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:30.260 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:30.260 job3: (groupid=0, jobs=1): err= 0: pid=444482: Mon Jul 15 23:53:45 2024 00:17:30.260 read: IOPS=30, BW=124KiB/s (127kB/s)(128KiB/1033msec) 00:17:30.260 slat (nsec): min=26432, max=27550, avg=26904.47, stdev=221.13 00:17:30.260 clat (usec): min=869, max=42103, avg=22704.56, stdev=20724.96 00:17:30.260 lat (usec): min=896, max=42130, avg=22731.47, stdev=20724.89 00:17:30.260 clat percentiles (usec): 00:17:30.260 | 1.00th=[ 873], 5.00th=[ 873], 10.00th=[ 889], 20.00th=[ 922], 00:17:30.260 | 30.00th=[ 1012], 40.00th=[ 1123], 50.00th=[41157], 60.00th=[41681], 00:17:30.260 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:30.260 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:30.260 | 99.99th=[42206] 00:17:30.260 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:17:30.260 slat (nsec): min=3545, max=51806, avg=13656.76, stdev=10263.77 00:17:30.260 clat (usec): min=175, max=1012, avg=574.89, stdev=152.81 00:17:30.260 lat 
(usec): min=181, max=1022, avg=588.54, stdev=158.16 00:17:30.260 clat percentiles (usec): 00:17:30.260 | 1.00th=[ 297], 5.00th=[ 371], 10.00th=[ 388], 20.00th=[ 420], 00:17:30.260 | 30.00th=[ 453], 40.00th=[ 502], 50.00th=[ 578], 60.00th=[ 635], 00:17:30.260 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 775], 95.00th=[ 824], 00:17:30.260 | 99.00th=[ 881], 99.50th=[ 947], 99.90th=[ 1012], 99.95th=[ 1012], 00:17:30.260 | 99.99th=[ 1012] 00:17:30.260 bw ( KiB/s): min= 4096, max= 4096, per=47.61%, avg=4096.00, stdev= 0.00, samples=1 00:17:30.260 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:30.260 lat (usec) : 250=0.37%, 500=36.58%, 750=44.12%, 1000=14.52% 00:17:30.260 lat (msec) : 2=1.29%, 50=3.12% 00:17:30.260 cpu : usr=0.19%, sys=1.26%, ctx=546, majf=0, minf=1 00:17:30.260 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:30.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:30.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:30.260 issued rwts: total=32,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:30.260 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:30.260 00:17:30.260 Run status group 0 (all jobs): 00:17:30.260 READ: bw=4159KiB/s (4259kB/s), 71.6KiB/s-2046KiB/s (73.3kB/s-2095kB/s), io=4296KiB (4399kB), run=1001-1033msec 00:17:30.260 WRITE: bw=8604KiB/s (8811kB/s), 1983KiB/s-2406KiB/s (2030kB/s-2463kB/s), io=8888KiB (9101kB), run=1001-1033msec 00:17:30.260 00:17:30.260 Disk stats (read/write): 00:17:30.260 nvme0n1: ios=62/512, merge=0/0, ticks=502/259, in_queue=761, util=82.26% 00:17:30.260 nvme0n2: ios=427/512, merge=0/0, ticks=491/340, in_queue=831, util=87.19% 00:17:30.260 nvme0n3: ios=425/512, merge=0/0, ticks=560/290, in_queue=850, util=94.51% 00:17:30.260 nvme0n4: ios=84/512, merge=0/0, ticks=908/262, in_queue=1170, util=95.84% 00:17:30.260 23:53:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:30.260 [global] 00:17:30.260 thread=1 00:17:30.260 invalidate=1 00:17:30.260 rw=write 00:17:30.260 time_based=1 00:17:30.260 runtime=1 00:17:30.260 ioengine=libaio 00:17:30.260 direct=1 00:17:30.260 bs=4096 00:17:30.260 iodepth=128 00:17:30.260 norandommap=0 00:17:30.260 numjobs=1 00:17:30.260 00:17:30.260 verify_dump=1 00:17:30.260 verify_backlog=512 00:17:30.260 verify_state_save=0 00:17:30.260 do_verify=1 00:17:30.260 verify=crc32c-intel 00:17:30.260 [job0] 00:17:30.260 filename=/dev/nvme0n1 00:17:30.260 [job1] 00:17:30.260 filename=/dev/nvme0n2 00:17:30.260 [job2] 00:17:30.261 filename=/dev/nvme0n3 00:17:30.261 [job3] 00:17:30.261 filename=/dev/nvme0n4 00:17:30.261 Could not set queue depth (nvme0n1) 00:17:30.261 Could not set queue depth (nvme0n2) 00:17:30.261 Could not set queue depth (nvme0n3) 00:17:30.261 Could not set queue depth (nvme0n4) 00:17:30.828 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:30.828 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:30.828 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:30.828 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:30.828 fio-3.35 00:17:30.828 Starting 4 threads 00:17:32.227 00:17:32.227 job0: (groupid=0, jobs=1): err= 0: pid=444986: 
Mon Jul 15 23:53:47 2024 00:17:32.227 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:17:32.227 slat (nsec): min=877, max=30669k, avg=97621.16, stdev=928223.19 00:17:32.227 clat (usec): min=1972, max=109439, avg=12544.15, stdev=12842.61 00:17:32.227 lat (msec): min=2, max=109, avg=12.64, stdev=12.98 00:17:32.227 clat percentiles (msec): 00:17:32.227 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 7], 00:17:32.227 | 30.00th=[ 8], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 10], 00:17:32.227 | 70.00th=[ 11], 80.00th=[ 12], 90.00th=[ 26], 95.00th=[ 39], 00:17:32.227 | 99.00th=[ 71], 99.50th=[ 99], 99.90th=[ 110], 99.95th=[ 110], 00:17:32.227 | 99.99th=[ 110] 00:17:32.227 write: IOPS=4944, BW=19.3MiB/s (20.3MB/s)(19.5MiB/1009msec); 0 zone resets 00:17:32.227 slat (nsec): min=1666, max=15258k, avg=87477.39, stdev=644442.29 00:17:32.227 clat (usec): min=967, max=109405, avg=15177.72, stdev=18778.96 00:17:32.227 lat (usec): min=1001, max=109432, avg=15265.19, stdev=18867.95 00:17:32.227 clat percentiles (usec): 00:17:32.227 | 1.00th=[ 1598], 5.00th=[ 2507], 10.00th=[ 3294], 20.00th=[ 4686], 00:17:32.227 | 30.00th=[ 5866], 40.00th=[ 7373], 50.00th=[ 8455], 60.00th=[ 9503], 00:17:32.227 | 70.00th=[ 10814], 80.00th=[ 18220], 90.00th=[ 42206], 95.00th=[ 58459], 00:17:32.227 | 99.00th=[ 89654], 99.50th=[ 96994], 99.90th=[101188], 99.95th=[101188], 00:17:32.227 | 99.99th=[109577] 00:17:32.227 bw ( KiB/s): min=18248, max=20640, per=22.78%, avg=19444.00, stdev=1691.40, samples=2 00:17:32.227 iops : min= 4562, max= 5160, avg=4861.00, stdev=422.85, samples=2 00:17:32.227 lat (usec) : 1000=0.01% 00:17:32.227 lat (msec) : 2=1.31%, 4=9.00%, 10=53.66%, 20=20.02%, 50=10.36% 00:17:32.227 lat (msec) : 100=5.38%, 250=0.25% 00:17:32.227 cpu : usr=3.67%, sys=4.96%, ctx=380, majf=0, minf=1 00:17:32.227 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:32.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:32.227 issued rwts: total=4096,4989,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:32.227 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:32.227 job1: (groupid=0, jobs=1): err= 0: pid=444993: Mon Jul 15 23:53:47 2024 00:17:32.227 read: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec) 00:17:32.227 slat (nsec): min=962, max=12137k, avg=73078.27, stdev=560486.23 00:17:32.227 clat (usec): min=3481, max=31459, avg=9785.01, stdev=4075.08 00:17:32.227 lat (usec): min=3547, max=31461, avg=9858.08, stdev=4107.11 00:17:32.227 clat percentiles (usec): 00:17:32.227 | 1.00th=[ 4490], 5.00th=[ 5211], 10.00th=[ 5538], 20.00th=[ 6390], 00:17:32.227 | 30.00th=[ 7046], 40.00th=[ 8029], 50.00th=[ 8848], 60.00th=[ 9896], 00:17:32.227 | 70.00th=[11338], 80.00th=[12649], 90.00th=[14877], 95.00th=[17957], 00:17:32.227 | 99.00th=[23200], 99.50th=[25822], 99.90th=[30540], 99.95th=[31327], 00:17:32.227 | 99.99th=[31589] 00:17:32.227 write: IOPS=7114, BW=27.8MiB/s (29.1MB/s)(27.9MiB/1004msec); 0 zone resets 00:17:32.227 slat (nsec): min=1647, max=19688k, avg=66383.08, stdev=505922.64 00:17:32.227 clat (usec): min=1489, max=31455, avg=8646.40, stdev=4499.01 00:17:32.227 lat (usec): min=1876, max=31458, avg=8712.78, stdev=4517.66 00:17:32.227 clat percentiles (usec): 00:17:32.227 | 1.00th=[ 3064], 5.00th=[ 3818], 10.00th=[ 4359], 20.00th=[ 5342], 00:17:32.227 | 30.00th=[ 5866], 40.00th=[ 6390], 50.00th=[ 6849], 60.00th=[ 7898], 00:17:32.227 | 70.00th=[ 9372], 
80.00th=[12911], 90.00th=[15401], 95.00th=[18744], 00:17:32.227 | 99.00th=[21627], 99.50th=[22152], 99.90th=[22152], 99.95th=[22152], 00:17:32.227 | 99.99th=[31327] 00:17:32.227 bw ( KiB/s): min=25680, max=30448, per=32.88%, avg=28064.00, stdev=3371.49, samples=2 00:17:32.227 iops : min= 6420, max= 7612, avg=7016.00, stdev=842.87, samples=2 00:17:32.227 lat (msec) : 2=0.06%, 4=3.81%, 10=62.93%, 20=29.77%, 50=3.43% 00:17:32.227 cpu : usr=5.88%, sys=6.08%, ctx=424, majf=0, minf=1 00:17:32.227 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:17:32.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:32.227 issued rwts: total=6656,7143,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:32.227 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:32.227 job2: (groupid=0, jobs=1): err= 0: pid=445020: Mon Jul 15 23:53:47 2024 00:17:32.227 read: IOPS=4780, BW=18.7MiB/s (19.6MB/s)(18.9MiB/1010msec) 00:17:32.227 slat (nsec): min=899, max=14787k, avg=77349.42, stdev=697179.26 00:17:32.227 clat (usec): min=1436, max=33357, avg=11739.97, stdev=4473.61 00:17:32.227 lat (usec): min=1461, max=33362, avg=11817.32, stdev=4520.83 00:17:32.227 clat percentiles (usec): 00:17:32.227 | 1.00th=[ 2245], 5.00th=[ 3884], 10.00th=[ 7504], 20.00th=[ 8455], 00:17:32.227 | 30.00th=[10028], 40.00th=[11207], 50.00th=[11600], 60.00th=[12125], 00:17:32.227 | 70.00th=[13173], 80.00th=[14091], 90.00th=[16712], 95.00th=[20055], 00:17:32.227 | 99.00th=[27395], 99.50th=[31327], 99.90th=[31327], 99.95th=[31327], 00:17:32.227 | 99.99th=[33424] 00:17:32.227 write: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec); 0 zone resets 00:17:32.227 slat (nsec): min=1637, max=19068k, avg=99589.56, stdev=761784.56 00:17:32.227 clat (usec): min=868, max=56699, avg=13964.78, stdev=11445.20 00:17:32.227 lat (usec): min=901, max=56709, avg=14064.37, stdev=11524.62 00:17:32.227 clat percentiles (usec): 00:17:32.227 | 1.00th=[ 1647], 5.00th=[ 3982], 10.00th=[ 5014], 20.00th=[ 6259], 00:17:32.227 | 30.00th=[ 7046], 40.00th=[ 8029], 50.00th=[ 9896], 60.00th=[12649], 00:17:32.227 | 70.00th=[15533], 80.00th=[17957], 90.00th=[31065], 95.00th=[43254], 00:17:32.227 | 99.00th=[53216], 99.50th=[55837], 99.90th=[56886], 99.95th=[56886], 00:17:32.227 | 99.99th=[56886] 00:17:32.227 bw ( KiB/s): min=18352, max=21984, per=23.63%, avg=20168.00, stdev=2568.21, samples=2 00:17:32.227 iops : min= 4588, max= 5496, avg=5042.00, stdev=642.05, samples=2 00:17:32.227 lat (usec) : 1000=0.01% 00:17:32.227 lat (msec) : 2=1.38%, 4=3.65%, 10=35.06%, 20=49.56%, 50=9.22% 00:17:32.227 lat (msec) : 100=1.13% 00:17:32.227 cpu : usr=5.35%, sys=3.96%, ctx=305, majf=0, minf=1 00:17:32.227 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:32.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:32.227 issued rwts: total=4828,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:32.227 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:32.227 job3: (groupid=0, jobs=1): err= 0: pid=445031: Mon Jul 15 23:53:47 2024 00:17:32.227 read: IOPS=4031, BW=15.7MiB/s (16.5MB/s)(16.0MiB/1016msec) 00:17:32.227 slat (nsec): min=1005, max=11737k, avg=100031.09, stdev=735021.00 00:17:32.227 clat (usec): min=4375, max=36517, avg=12671.79, stdev=4351.47 00:17:32.227 lat (usec): min=4380, max=36526, avg=12771.83, 
stdev=4405.18 00:17:32.227 clat percentiles (usec): 00:17:32.227 | 1.00th=[ 6849], 5.00th=[ 8160], 10.00th=[ 8979], 20.00th=[ 9372], 00:17:32.227 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11338], 60.00th=[12518], 00:17:32.227 | 70.00th=[13173], 80.00th=[15401], 90.00th=[19268], 95.00th=[21103], 00:17:32.227 | 99.00th=[27919], 99.50th=[30278], 99.90th=[36439], 99.95th=[36439], 00:17:32.227 | 99.99th=[36439] 00:17:32.227 write: IOPS=4360, BW=17.0MiB/s (17.9MB/s)(17.3MiB/1016msec); 0 zone resets 00:17:32.227 slat (nsec): min=1719, max=16782k, avg=128693.06, stdev=799714.75 00:17:32.227 clat (usec): min=1168, max=79110, avg=17266.56, stdev=15701.39 00:17:32.227 lat (usec): min=1179, max=79114, avg=17395.25, stdev=15783.89 00:17:32.227 clat percentiles (usec): 00:17:32.227 | 1.00th=[ 3490], 5.00th=[ 5735], 10.00th=[ 6587], 20.00th=[ 7373], 00:17:32.227 | 30.00th=[ 8291], 40.00th=[ 9634], 50.00th=[11600], 60.00th=[13829], 00:17:32.227 | 70.00th=[16450], 80.00th=[20317], 90.00th=[40109], 95.00th=[57934], 00:17:32.227 | 99.00th=[76022], 99.50th=[78119], 99.90th=[79168], 99.95th=[79168], 00:17:32.227 | 99.99th=[79168] 00:17:32.227 bw ( KiB/s): min=16464, max=17952, per=20.16%, avg=17208.00, stdev=1052.17, samples=2 00:17:32.228 iops : min= 4116, max= 4488, avg=4302.00, stdev=263.04, samples=2 00:17:32.228 lat (msec) : 2=0.02%, 4=0.65%, 10=34.34%, 20=49.41%, 50=12.23% 00:17:32.228 lat (msec) : 100=3.34% 00:17:32.228 cpu : usr=4.53%, sys=3.55%, ctx=347, majf=0, minf=1 00:17:32.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:32.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:32.228 issued rwts: total=4096,4430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:32.228 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:32.228 00:17:32.228 Run status group 0 (all jobs): 00:17:32.228 READ: bw=75.6MiB/s (79.3MB/s), 15.7MiB/s-25.9MiB/s (16.5MB/s-27.2MB/s), io=76.9MiB (80.6MB), run=1004-1016msec 00:17:32.228 WRITE: bw=83.4MiB/s (87.4MB/s), 17.0MiB/s-27.8MiB/s (17.9MB/s-29.1MB/s), io=84.7MiB (88.8MB), run=1004-1016msec 00:17:32.228 00:17:32.228 Disk stats (read/write): 00:17:32.228 nvme0n1: ios=2765/3072, merge=0/0, ticks=33362/57549, in_queue=90911, util=96.99% 00:17:32.228 nvme0n2: ios=5017/5120, merge=0/0, ticks=50024/45564, in_queue=95588, util=94.52% 00:17:32.228 nvme0n3: ios=4050/4096, merge=0/0, ticks=48227/43394, in_queue=91621, util=95.38% 00:17:32.228 nvme0n4: ios=3625/3735, merge=0/0, ticks=44086/52169, in_queue=96255, util=96.18% 00:17:32.228 23:53:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:32.228 [global] 00:17:32.228 thread=1 00:17:32.228 invalidate=1 00:17:32.228 rw=randwrite 00:17:32.228 time_based=1 00:17:32.228 runtime=1 00:17:32.228 ioengine=libaio 00:17:32.228 direct=1 00:17:32.228 bs=4096 00:17:32.228 iodepth=128 00:17:32.228 norandommap=0 00:17:32.228 numjobs=1 00:17:32.228 00:17:32.228 verify_dump=1 00:17:32.228 verify_backlog=512 00:17:32.228 verify_state_save=0 00:17:32.228 do_verify=1 00:17:32.228 verify=crc32c-intel 00:17:32.228 [job0] 00:17:32.228 filename=/dev/nvme0n1 00:17:32.228 [job1] 00:17:32.228 filename=/dev/nvme0n2 00:17:32.228 [job2] 00:17:32.228 filename=/dev/nvme0n3 00:17:32.228 [job3] 00:17:32.228 filename=/dev/nvme0n4 00:17:32.228 Could not set queue depth (nvme0n1) 00:17:32.228 
Could not set queue depth (nvme0n2) 00:17:32.228 Could not set queue depth (nvme0n3) 00:17:32.228 Could not set queue depth (nvme0n4) 00:17:32.487 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:32.487 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:32.487 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:32.487 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:32.487 fio-3.35 00:17:32.487 Starting 4 threads 00:17:33.907 00:17:33.907 job0: (groupid=0, jobs=1): err= 0: pid=445517: Mon Jul 15 23:53:48 2024 00:17:33.907 read: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec) 00:17:33.907 slat (nsec): min=881, max=5244.1k, avg=68258.08, stdev=385366.60 00:17:33.907 clat (usec): min=1638, max=18161, avg=8828.81, stdev=1557.30 00:17:33.907 lat (usec): min=1652, max=18167, avg=8897.07, stdev=1587.08 00:17:33.907 clat percentiles (usec): 00:17:33.907 | 1.00th=[ 4948], 5.00th=[ 6849], 10.00th=[ 7373], 20.00th=[ 7767], 00:17:33.907 | 30.00th=[ 8160], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 9110], 00:17:33.907 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11207], 00:17:33.907 | 99.00th=[14091], 99.50th=[14615], 99.90th=[16057], 99.95th=[16057], 00:17:33.907 | 99.99th=[18220] 00:17:33.907 write: IOPS=7395, BW=28.9MiB/s (30.3MB/s)(28.9MiB/1001msec); 0 zone resets 00:17:33.907 slat (nsec): min=1518, max=10221k, avg=63540.20, stdev=372142.56 00:17:33.907 clat (usec): min=819, max=22637, avg=8553.07, stdev=2412.14 00:17:33.907 lat (usec): min=1696, max=22646, avg=8616.62, stdev=2438.18 00:17:33.907 clat percentiles (usec): 00:17:33.907 | 1.00th=[ 5276], 5.00th=[ 6128], 10.00th=[ 6456], 20.00th=[ 6783], 00:17:33.907 | 30.00th=[ 7177], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8356], 00:17:33.907 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10945], 95.00th=[13435], 00:17:33.907 | 99.00th=[18220], 99.50th=[18482], 99.90th=[22676], 99.95th=[22676], 00:17:33.907 | 99.99th=[22676] 00:17:33.907 bw ( KiB/s): min=26256, max=26256, per=28.17%, avg=26256.00, stdev= 0.00, samples=1 00:17:33.907 iops : min= 6564, max= 6564, avg=6564.00, stdev= 0.00, samples=1 00:17:33.907 lat (usec) : 1000=0.01% 00:17:33.907 lat (msec) : 2=0.06%, 4=0.71%, 10=82.77%, 20=16.28%, 50=0.18% 00:17:33.907 cpu : usr=3.50%, sys=4.10%, ctx=799, majf=0, minf=1 00:17:33.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:33.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:33.907 issued rwts: total=7168,7403,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:33.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:33.907 job1: (groupid=0, jobs=1): err= 0: pid=445521: Mon Jul 15 23:53:48 2024 00:17:33.907 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:17:33.907 slat (nsec): min=873, max=9229.3k, avg=74914.03, stdev=485160.34 00:17:33.907 clat (usec): min=3022, max=30479, avg=9179.96, stdev=3534.18 00:17:33.907 lat (usec): min=3050, max=30483, avg=9254.88, stdev=3569.36 00:17:33.907 clat percentiles (usec): 00:17:33.907 | 1.00th=[ 4293], 5.00th=[ 6259], 10.00th=[ 6915], 20.00th=[ 7308], 00:17:33.907 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[ 8356], 00:17:33.907 | 70.00th=[ 
8979], 80.00th=[10814], 90.00th=[13173], 95.00th=[15664], 00:17:33.907 | 99.00th=[25560], 99.50th=[27132], 99.90th=[30016], 99.95th=[30540], 00:17:33.907 | 99.99th=[30540] 00:17:33.907 write: IOPS=6583, BW=25.7MiB/s (27.0MB/s)(25.8MiB/1003msec); 0 zone resets 00:17:33.907 slat (nsec): min=1455, max=7464.9k, avg=77714.57, stdev=382242.43 00:17:33.907 clat (usec): min=2286, max=34430, avg=10688.03, stdev=6711.46 00:17:33.907 lat (usec): min=2298, max=34435, avg=10765.74, stdev=6755.66 00:17:33.907 clat percentiles (usec): 00:17:33.907 | 1.00th=[ 3785], 5.00th=[ 5014], 10.00th=[ 5604], 20.00th=[ 6587], 00:17:33.907 | 30.00th=[ 6915], 40.00th=[ 7111], 50.00th=[ 7504], 60.00th=[ 8160], 00:17:33.907 | 70.00th=[10421], 80.00th=[13829], 90.00th=[21365], 95.00th=[27132], 00:17:33.907 | 99.00th=[31589], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:17:33.907 | 99.99th=[34341] 00:17:33.907 bw ( KiB/s): min=19040, max=32768, per=27.80%, avg=25904.00, stdev=9707.16, samples=2 00:17:33.907 iops : min= 4760, max= 8192, avg=6476.00, stdev=2426.79, samples=2 00:17:33.907 lat (msec) : 4=1.10%, 10=70.39%, 20=20.48%, 50=8.03% 00:17:33.907 cpu : usr=4.29%, sys=4.19%, ctx=738, majf=0, minf=1 00:17:33.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:33.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:33.907 issued rwts: total=6144,6603,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:33.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:33.907 job2: (groupid=0, jobs=1): err= 0: pid=445539: Mon Jul 15 23:53:48 2024 00:17:33.907 read: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.7MiB/1043msec) 00:17:33.907 slat (nsec): min=899, max=9449.5k, avg=95780.93, stdev=602490.21 00:17:33.907 clat (usec): min=3469, max=53880, avg=13054.24, stdev=6487.53 00:17:33.907 lat (usec): min=3474, max=56981, avg=13150.02, stdev=6510.53 00:17:33.907 clat percentiles (usec): 00:17:33.907 | 1.00th=[ 7635], 5.00th=[ 8586], 10.00th=[ 8717], 20.00th=[10421], 00:17:33.907 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:17:33.907 | 70.00th=[12518], 80.00th=[13829], 90.00th=[15664], 95.00th=[22938], 00:17:33.907 | 99.00th=[49546], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:17:33.907 | 99.99th=[53740] 00:17:33.907 write: IOPS=5399, BW=21.1MiB/s (22.1MB/s)(22.0MiB/1043msec); 0 zone resets 00:17:33.907 slat (nsec): min=1510, max=10907k, avg=83423.12, stdev=428603.51 00:17:33.907 clat (usec): min=1114, max=47985, avg=11243.08, stdev=4066.15 00:17:33.907 lat (usec): min=1123, max=47987, avg=11326.50, stdev=4086.79 00:17:33.907 clat percentiles (usec): 00:17:33.907 | 1.00th=[ 4359], 5.00th=[ 6325], 10.00th=[ 7504], 20.00th=[ 8848], 00:17:33.907 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[11076], 00:17:33.907 | 70.00th=[11600], 80.00th=[12125], 90.00th=[15401], 95.00th=[17957], 00:17:33.907 | 99.00th=[25822], 99.50th=[32113], 99.90th=[46924], 99.95th=[46924], 00:17:33.907 | 99.99th=[47973] 00:17:33.907 bw ( KiB/s): min=21008, max=24048, per=24.17%, avg=22528.00, stdev=2149.60, samples=2 00:17:33.907 iops : min= 5252, max= 6012, avg=5632.00, stdev=537.40, samples=2 00:17:33.907 lat (msec) : 2=0.10%, 4=0.49%, 10=21.74%, 20=73.38%, 50=3.91% 00:17:33.907 lat (msec) : 100=0.38% 00:17:33.907 cpu : usr=3.45%, sys=3.17%, ctx=677, majf=0, minf=1 00:17:33.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:33.907 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:33.907 issued rwts: total=5288,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:33.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:33.907 job3: (groupid=0, jobs=1): err= 0: pid=445546: Mon Jul 15 23:53:48 2024 00:17:33.907 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:17:33.907 slat (nsec): min=935, max=15946k, avg=94486.60, stdev=727344.85 00:17:33.907 clat (usec): min=4280, max=53603, avg=13239.30, stdev=6633.07 00:17:33.907 lat (usec): min=4303, max=56890, avg=13333.78, stdev=6682.86 00:17:33.907 clat percentiles (usec): 00:17:33.907 | 1.00th=[ 4490], 5.00th=[ 6259], 10.00th=[ 8029], 20.00th=[ 9110], 00:17:33.907 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10552], 60.00th=[12256], 00:17:33.907 | 70.00th=[14877], 80.00th=[15926], 90.00th=[22938], 95.00th=[25035], 00:17:33.907 | 99.00th=[47449], 99.50th=[49021], 99.90th=[49546], 99.95th=[49546], 00:17:33.907 | 99.99th=[53740] 00:17:33.907 write: IOPS=4639, BW=18.1MiB/s (19.0MB/s)(18.2MiB/1005msec); 0 zone resets 00:17:33.907 slat (nsec): min=1531, max=15832k, avg=106359.49, stdev=696449.53 00:17:33.907 clat (usec): min=661, max=70191, avg=14222.64, stdev=9532.20 00:17:33.907 lat (usec): min=687, max=70199, avg=14329.00, stdev=9588.54 00:17:33.907 clat percentiles (usec): 00:17:33.907 | 1.00th=[ 2073], 5.00th=[ 5735], 10.00th=[ 7635], 20.00th=[ 8586], 00:17:33.907 | 30.00th=[ 9110], 40.00th=[10683], 50.00th=[11994], 60.00th=[13566], 00:17:33.907 | 70.00th=[14615], 80.00th=[17171], 90.00th=[23725], 95.00th=[30016], 00:17:33.907 | 99.00th=[63177], 99.50th=[67634], 99.90th=[69731], 99.95th=[69731], 00:17:33.907 | 99.99th=[69731] 00:17:33.907 bw ( KiB/s): min=18336, max=18528, per=19.78%, avg=18432.00, stdev=135.76, samples=2 00:17:33.907 iops : min= 4584, max= 4632, avg=4608.00, stdev=33.94, samples=2 00:17:33.907 lat (usec) : 750=0.02%, 1000=0.11% 00:17:33.907 lat (msec) : 2=0.26%, 4=1.13%, 10=33.89%, 20=51.21%, 50=12.35% 00:17:33.907 lat (msec) : 100=1.02% 00:17:33.907 cpu : usr=3.49%, sys=4.38%, ctx=407, majf=0, minf=1 00:17:33.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:33.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:33.907 issued rwts: total=4608,4663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:33.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:33.907 00:17:33.907 Run status group 0 (all jobs): 00:17:33.907 READ: bw=86.9MiB/s (91.1MB/s), 17.9MiB/s-28.0MiB/s (18.8MB/s-29.3MB/s), io=90.7MiB (95.1MB), run=1001-1043msec 00:17:33.907 WRITE: bw=91.0MiB/s (95.4MB/s), 18.1MiB/s-28.9MiB/s (19.0MB/s-30.3MB/s), io=94.9MiB (99.5MB), run=1001-1043msec 00:17:33.907 00:17:33.907 Disk stats (read/write): 00:17:33.907 nvme0n1: ios=5785/6144, merge=0/0, ticks=22717/21572, in_queue=44289, util=96.69% 00:17:33.907 nvme0n2: ios=5156/5607, merge=0/0, ticks=30209/44784, in_queue=74993, util=88.07% 00:17:33.907 nvme0n3: ios=4389/4608, merge=0/0, ticks=34692/34648, in_queue=69340, util=88.50% 00:17:33.907 nvme0n4: ios=4028/4096, merge=0/0, ticks=34709/41349, in_queue=76058, util=99.57% 00:17:33.907 23:53:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:33.908 23:53:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=445829 00:17:33.908 23:53:48 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:33.908 23:53:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:33.908 [global] 00:17:33.908 thread=1 00:17:33.908 invalidate=1 00:17:33.908 rw=read 00:17:33.908 time_based=1 00:17:33.908 runtime=10 00:17:33.908 ioengine=libaio 00:17:33.908 direct=1 00:17:33.908 bs=4096 00:17:33.908 iodepth=1 00:17:33.908 norandommap=1 00:17:33.908 numjobs=1 00:17:33.908 00:17:33.908 [job0] 00:17:33.908 filename=/dev/nvme0n1 00:17:33.908 [job1] 00:17:33.908 filename=/dev/nvme0n2 00:17:33.908 [job2] 00:17:33.908 filename=/dev/nvme0n3 00:17:33.908 [job3] 00:17:33.908 filename=/dev/nvme0n4 00:17:33.908 Could not set queue depth (nvme0n1) 00:17:33.908 Could not set queue depth (nvme0n2) 00:17:33.908 Could not set queue depth (nvme0n3) 00:17:33.908 Could not set queue depth (nvme0n4) 00:17:34.170 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:34.170 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:34.170 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:34.170 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:34.170 fio-3.35 00:17:34.170 Starting 4 threads 00:17:36.799 23:53:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:36.799 23:53:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:36.799 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=3911680, buflen=4096 00:17:36.799 fio: pid=446059, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:37.058 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=9834496, buflen=4096 00:17:37.058 fio: pid=446052, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:37.058 23:53:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:37.058 23:53:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:37.058 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=6512640, buflen=4096 00:17:37.058 fio: pid=446034, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:37.058 23:53:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:37.058 23:53:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:37.317 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=311296, buflen=4096 00:17:37.317 fio: pid=446038, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:37.317 23:53:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:37.317 23:53:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:37.317 00:17:37.317 job0: (groupid=0, 
jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=446034: Mon Jul 15 23:53:52 2024 00:17:37.317 read: IOPS=545, BW=2181KiB/s (2233kB/s)(6360KiB/2916msec) 00:17:37.317 slat (usec): min=5, max=18549, avg=44.51, stdev=559.96 00:17:37.317 clat (usec): min=493, max=42160, avg=1782.66, stdev=5517.50 00:17:37.317 lat (usec): min=498, max=42185, avg=1827.18, stdev=5543.67 00:17:37.317 clat percentiles (usec): 00:17:37.317 | 1.00th=[ 570], 5.00th=[ 709], 10.00th=[ 783], 20.00th=[ 873], 00:17:37.317 | 30.00th=[ 914], 40.00th=[ 947], 50.00th=[ 979], 60.00th=[ 1106], 00:17:37.317 | 70.00th=[ 1172], 80.00th=[ 1221], 90.00th=[ 1270], 95.00th=[ 1303], 00:17:37.317 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:37.317 | 99.99th=[42206] 00:17:37.317 bw ( KiB/s): min= 936, max= 4376, per=36.17%, avg=2355.20, stdev=1362.23, samples=5 00:17:37.317 iops : min= 234, max= 1094, avg=588.80, stdev=340.56, samples=5 00:17:37.317 lat (usec) : 500=0.13%, 750=7.23%, 1000=45.69% 00:17:37.317 lat (msec) : 2=45.00%, 50=1.89% 00:17:37.317 cpu : usr=1.10%, sys=1.89%, ctx=1594, majf=0, minf=1 00:17:37.317 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:37.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.317 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.317 issued rwts: total=1591,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.317 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:37.317 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=446038: Mon Jul 15 23:53:52 2024 00:17:37.317 read: IOPS=24, BW=98.5KiB/s (101kB/s)(304KiB/3085msec) 00:17:37.317 slat (usec): min=7, max=8441, avg=232.77, stdev=1278.37 00:17:37.317 clat (usec): min=940, max=42131, avg=40336.54, stdev=7874.90 00:17:37.317 lat (usec): min=979, max=49895, avg=40572.03, stdev=8009.05 00:17:37.317 clat percentiles (usec): 00:17:37.317 | 1.00th=[ 938], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:17:37.317 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:17:37.317 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:37.317 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:37.317 | 99.99th=[42206] 00:17:37.317 bw ( KiB/s): min= 96, max= 104, per=1.52%, avg=99.20, stdev= 4.38, samples=5 00:17:37.317 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:17:37.317 lat (usec) : 1000=1.30% 00:17:37.317 lat (msec) : 2=1.30%, 4=1.30%, 50=94.81% 00:17:37.317 cpu : usr=0.16%, sys=0.00%, ctx=79, majf=0, minf=1 00:17:37.317 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:37.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.317 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.317 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.317 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:37.317 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=446052: Mon Jul 15 23:53:52 2024 00:17:37.317 read: IOPS=880, BW=3521KiB/s (3605kB/s)(9604KiB/2728msec) 00:17:37.317 slat (usec): min=6, max=12811, avg=35.78, stdev=344.24 00:17:37.317 clat (usec): min=355, max=1349, avg=1093.54, stdev=134.49 00:17:37.317 lat (usec): min=366, max=13759, avg=1124.74, stdev=291.29 00:17:37.317 clat percentiles (usec): 
00:17:37.317 | 1.00th=[ 502], 5.00th=[ 840], 10.00th=[ 914], 20.00th=[ 1020], 00:17:37.317 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:17:37.317 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1237], 00:17:37.317 | 99.00th=[ 1287], 99.50th=[ 1303], 99.90th=[ 1319], 99.95th=[ 1352], 00:17:37.317 | 99.99th=[ 1352] 00:17:37.317 bw ( KiB/s): min= 3376, max= 3656, per=53.74%, avg=3499.20, stdev=119.55, samples=5 00:17:37.317 iops : min= 844, max= 914, avg=874.80, stdev=29.89, samples=5 00:17:37.317 lat (usec) : 500=1.04%, 750=1.29%, 1000=15.78% 00:17:37.317 lat (msec) : 2=81.85% 00:17:37.317 cpu : usr=0.99%, sys=3.96%, ctx=2404, majf=0, minf=1 00:17:37.317 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:37.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.317 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.317 issued rwts: total=2402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.317 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:37.317 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=446059: Mon Jul 15 23:53:52 2024 00:17:37.317 read: IOPS=367, BW=1470KiB/s (1505kB/s)(3820KiB/2599msec) 00:17:37.317 slat (nsec): min=5392, max=67086, avg=25499.37, stdev=5145.66 00:17:37.317 clat (usec): min=416, max=44130, avg=2688.85, stdev=8316.81 00:17:37.317 lat (usec): min=440, max=44161, avg=2714.35, stdev=8317.02 00:17:37.317 clat percentiles (usec): 00:17:37.317 | 1.00th=[ 545], 5.00th=[ 742], 10.00th=[ 807], 20.00th=[ 873], 00:17:37.317 | 30.00th=[ 906], 40.00th=[ 930], 50.00th=[ 947], 60.00th=[ 971], 00:17:37.317 | 70.00th=[ 988], 80.00th=[ 1012], 90.00th=[ 1057], 95.00th=[ 1139], 00:17:37.317 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:17:37.317 | 99.99th=[44303] 00:17:37.317 bw ( KiB/s): min= 96, max= 4184, per=23.39%, avg=1523.20, stdev=1988.56, samples=5 00:17:37.317 iops : min= 24, max= 1046, avg=380.80, stdev=497.14, samples=5 00:17:37.317 lat (usec) : 500=0.42%, 750=5.02%, 1000=71.13% 00:17:37.317 lat (msec) : 2=19.04%, 50=4.29% 00:17:37.317 cpu : usr=0.38%, sys=1.66%, ctx=956, majf=0, minf=2 00:17:37.317 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:37.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.317 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.317 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.317 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:37.317 00:17:37.317 Run status group 0 (all jobs): 00:17:37.317 READ: bw=6512KiB/s (6668kB/s), 98.5KiB/s-3521KiB/s (101kB/s-3605kB/s), io=19.6MiB (20.6MB), run=2599-3085msec 00:17:37.317 00:17:37.317 Disk stats (read/write): 00:17:37.317 nvme0n1: ios=1580/0, merge=0/0, ticks=2566/0, in_queue=2566, util=93.79% 00:17:37.317 nvme0n2: ios=70/0, merge=0/0, ticks=2816/0, in_queue=2816, util=94.83% 00:17:37.317 nvme0n3: ios=2277/0, merge=0/0, ticks=2275/0, in_queue=2275, util=96.03% 00:17:37.317 nvme0n4: ios=954/0, merge=0/0, ticks=2442/0, in_queue=2442, util=96.42% 00:17:37.576 23:53:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:37.576 23:53:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 
00:17:37.576 23:53:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:37.576 23:53:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:37.836 23:53:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:37.837 23:53:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:38.096 23:53:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:38.096 23:53:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:38.096 23:53:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:38.096 23:53:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 445829 00:17:38.096 23:53:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:38.096 23:53:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:38.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:38.355 23:53:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:38.355 23:53:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1213 -- # local i=0 00:17:38.355 23:53:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:17:38.355 23:53:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:38.355 23:53:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:17:38.355 23:53:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:38.355 23:53:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1225 -- # return 0 00:17:38.355 23:53:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:38.355 23:53:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:38.355 nvmf hotplug test: fio failed as expected 00:17:38.355 23:53:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:38.355 23:53:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:38.355 23:53:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:38.355 23:53:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:38.355 23:53:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:38.355 23:53:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:38.355 23:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:38.355 23:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:38.355 23:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:38.355 23:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:38.355 23:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:38.355 23:53:53 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:38.615 rmmod nvme_tcp 00:17:38.615 rmmod nvme_fabrics 00:17:38.615 rmmod nvme_keyring 00:17:38.615 23:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:38.615 23:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:38.615 23:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:38.615 23:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 442323 ']' 00:17:38.615 23:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 442323 00:17:38.615 23:53:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@942 -- # '[' -z 442323 ']' 00:17:38.615 23:53:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # kill -0 442323 00:17:38.615 23:53:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@947 -- # uname 00:17:38.615 23:53:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:17:38.615 23:53:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 442323 00:17:38.615 23:53:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:17:38.615 23:53:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:17:38.615 23:53:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@960 -- # echo 'killing process with pid 442323' 00:17:38.615 killing process with pid 442323 00:17:38.615 23:53:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@961 -- # kill 442323 00:17:38.615 23:53:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # wait 442323 00:17:38.615 23:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:38.615 23:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:38.615 23:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:38.615 23:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:38.615 23:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:38.615 23:53:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.615 23:53:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:38.615 23:53:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.159 23:53:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:41.159 00:17:41.159 real 0m29.265s 00:17:41.159 user 2m40.317s 00:17:41.159 sys 0m9.522s 00:17:41.159 23:53:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1118 -- # xtrace_disable 00:17:41.159 23:53:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.159 ************************************ 00:17:41.159 END TEST nvmf_fio_target 00:17:41.159 ************************************ 00:17:41.159 23:53:55 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:17:41.159 23:53:55 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:41.159 23:53:55 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:17:41.159 23:53:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:17:41.159 23:53:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:17:41.159 ************************************ 00:17:41.159 START TEST nvmf_bdevio 00:17:41.159 ************************************ 00:17:41.159 23:53:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:41.159 * Looking for test storage... 00:17:41.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:41.160 23:53:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:49.301 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:49.301 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:49.301 Found net devices under 0000:31:00.0: cvl_0_0 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:49.301 
Found net devices under 0000:31:00.1: cvl_0_1 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:49.301 23:54:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:49.301 23:54:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:49.301 23:54:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:49.301 23:54:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:49.301 23:54:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:49.301 23:54:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:49.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:49.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.747 ms 00:17:49.301 00:17:49.301 --- 10.0.0.2 ping statistics --- 00:17:49.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.301 rtt min/avg/max/mdev = 0.747/0.747/0.747/0.000 ms 00:17:49.301 23:54:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:49.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:49.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:17:49.301 00:17:49.301 --- 10.0.0.1 ping statistics --- 00:17:49.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.302 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:17:49.302 23:54:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:49.302 23:54:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:49.302 23:54:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:49.302 23:54:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:49.302 23:54:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:49.302 23:54:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:49.302 23:54:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:49.302 23:54:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:49.302 23:54:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:49.302 23:54:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:49.302 23:54:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:49.302 23:54:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:49.302 23:54:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:49.302 23:54:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=451823 00:17:49.302 23:54:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 451823 00:17:49.302 23:54:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:49.302 23:54:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@823 -- # '[' -z 451823 ']' 00:17:49.302 23:54:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.302 23:54:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:49.302 23:54:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.302 23:54:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:49.302 23:54:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:49.302 [2024-07-15 23:54:04.235281] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:17:49.302 [2024-07-15 23:54:04.235332] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.302 [2024-07-15 23:54:04.324754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:49.302 [2024-07-15 23:54:04.388346] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.302 [2024-07-15 23:54:04.388380] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
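For reference, nvmf_tcp_init above builds a two-port loopback topology on the detected E810 interfaces: cvl_0_0 is moved into a private network namespace and addressed as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). A condensed sketch of that sequence, using the interface and namespace names from this run:

ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port on the initiator-side interface
ping -c 1 10.0.0.2                                                  # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability check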
00:17:49.302 [2024-07-15 23:54:04.388387] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.302 [2024-07-15 23:54:04.388394] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.302 [2024-07-15 23:54:04.388400] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:49.302 [2024-07-15 23:54:04.388536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:49.302 [2024-07-15 23:54:04.388689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:49.302 [2024-07-15 23:54:04.388838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:49.302 [2024-07-15 23:54:04.388839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:49.868 23:54:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:49.868 23:54:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # return 0 00:17:49.868 23:54:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:49.868 23:54:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:49.868 23:54:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:49.868 23:54:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.868 23:54:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:49.868 23:54:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:49.868 23:54:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:49.868 [2024-07-15 23:54:05.054830] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:50.125 23:54:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:50.125 23:54:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:50.125 23:54:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:50.125 23:54:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:50.125 Malloc0 00:17:50.125 23:54:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:50.125 23:54:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:50.125 23:54:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:50.125 23:54:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:50.125 23:54:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:50.125 23:54:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:50.125 23:54:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:50.125 23:54:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:50.125 23:54:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:50.125 23:54:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:50.125 23:54:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:50.125 23:54:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
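The rpc_cmd calls above provision the whole target path for the bdevio run: a TCP transport, a RAM-backed bdev, a subsystem, its namespace, and a listener on the namespaced address. Assuming rpc_cmd is a thin wrapper over scripts/rpc.py talking to the default /var/tmp/spdk.sock, the equivalent standalone sequence would be:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                 # transport options exactly as logged above
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                    # MALLOC_BDEV_SIZE=64 MiB, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420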
00:17:50.125 [2024-07-15 23:54:05.097959] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.125 23:54:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:50.125 23:54:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:50.125 23:54:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:50.125 23:54:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:50.125 23:54:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:50.125 23:54:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:50.125 23:54:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:50.125 { 00:17:50.125 "params": { 00:17:50.125 "name": "Nvme$subsystem", 00:17:50.125 "trtype": "$TEST_TRANSPORT", 00:17:50.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:50.125 "adrfam": "ipv4", 00:17:50.125 "trsvcid": "$NVMF_PORT", 00:17:50.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:50.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:50.126 "hdgst": ${hdgst:-false}, 00:17:50.126 "ddgst": ${ddgst:-false} 00:17:50.126 }, 00:17:50.126 "method": "bdev_nvme_attach_controller" 00:17:50.126 } 00:17:50.126 EOF 00:17:50.126 )") 00:17:50.126 23:54:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:50.126 23:54:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:50.126 23:54:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:50.126 23:54:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:50.126 "params": { 00:17:50.126 "name": "Nvme1", 00:17:50.126 "trtype": "tcp", 00:17:50.126 "traddr": "10.0.0.2", 00:17:50.126 "adrfam": "ipv4", 00:17:50.126 "trsvcid": "4420", 00:17:50.126 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.126 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:50.126 "hdgst": false, 00:17:50.126 "ddgst": false 00:17:50.126 }, 00:17:50.126 "method": "bdev_nvme_attach_controller" 00:17:50.126 }' 00:17:50.126 [2024-07-15 23:54:05.152588] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
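The JSON handed to bdevio through --json /dev/fd/62 is assembled by gen_nvmf_target_json from the heredoc shown above and is evidently delivered via bash process substitution, which exposes the generator's stdout to the child process as a /dev/fd path. A minimal illustration of that mechanism (emit_config is a hypothetical stand-in, not the real generator):

emit_config() {
  printf '%s\n' '{ "method": "bdev_nvme_attach_controller", "params": { "name": "Nvme1" } }'
}
cat <(emit_config)    # <(...) expands to a path such as /dev/fd/63 that streams the function's output
# the run above does the analogous thing, roughly:  bdevio --json <(gen_nvmf_target_json)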
00:17:50.126 [2024-07-15 23:54:05.152636] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid451967 ] 00:17:50.126 [2024-07-15 23:54:05.217052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:50.126 [2024-07-15 23:54:05.283731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.126 [2024-07-15 23:54:05.283741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:50.126 [2024-07-15 23:54:05.283744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.382 I/O targets: 00:17:50.382 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:50.382 00:17:50.382 00:17:50.382 CUnit - A unit testing framework for C - Version 2.1-3 00:17:50.382 http://cunit.sourceforge.net/ 00:17:50.382 00:17:50.382 00:17:50.382 Suite: bdevio tests on: Nvme1n1 00:17:50.382 Test: blockdev write read block ...passed 00:17:50.382 Test: blockdev write zeroes read block ...passed 00:17:50.382 Test: blockdev write zeroes read no split ...passed 00:17:50.639 Test: blockdev write zeroes read split ...passed 00:17:50.639 Test: blockdev write zeroes read split partial ...passed 00:17:50.639 Test: blockdev reset ...[2024-07-15 23:54:05.642656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:50.639 [2024-07-15 23:54:05.642724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134d370 (9): Bad file descriptor 00:17:50.639 [2024-07-15 23:54:05.660024] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:50.639 passed 00:17:50.639 Test: blockdev write read 8 blocks ...passed 00:17:50.639 Test: blockdev write read size > 128k ...passed 00:17:50.639 Test: blockdev write read invalid size ...passed 00:17:50.639 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:50.639 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:50.639 Test: blockdev write read max offset ...passed 00:17:50.897 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:50.897 Test: blockdev writev readv 8 blocks ...passed 00:17:50.897 Test: blockdev writev readv 30 x 1block ...passed 00:17:50.897 Test: blockdev writev readv block ...passed 00:17:50.897 Test: blockdev writev readv size > 128k ...passed 00:17:50.897 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:50.897 Test: blockdev comparev and writev ...[2024-07-15 23:54:05.927130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.897 [2024-07-15 23:54:05.927156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:50.897 [2024-07-15 23:54:05.927167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.897 [2024-07-15 23:54:05.927173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:50.897 [2024-07-15 23:54:05.927721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.897 [2024-07-15 23:54:05.927731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:50.897 [2024-07-15 23:54:05.927740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.897 [2024-07-15 23:54:05.927745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:50.897 [2024-07-15 23:54:05.928257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.897 [2024-07-15 23:54:05.928265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:50.897 [2024-07-15 23:54:05.928275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.897 [2024-07-15 23:54:05.928280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:50.897 [2024-07-15 23:54:05.928781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.897 [2024-07-15 23:54:05.928789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:50.897 [2024-07-15 23:54:05.928802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:50.897 [2024-07-15 23:54:05.928807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:50.897 passed 00:17:50.897 Test: blockdev nvme passthru rw ...passed 00:17:50.897 Test: blockdev nvme passthru vendor specific ...[2024-07-15 23:54:06.013200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.897 [2024-07-15 23:54:06.013217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:50.897 [2024-07-15 23:54:06.013574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.897 [2024-07-15 23:54:06.013582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:50.897 [2024-07-15 23:54:06.013954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.897 [2024-07-15 23:54:06.013962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:50.897 [2024-07-15 23:54:06.014334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.897 [2024-07-15 23:54:06.014343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:50.897 passed 00:17:50.897 Test: blockdev nvme admin passthru ...passed 00:17:50.897 Test: blockdev copy ...passed 00:17:50.897 00:17:50.897 Run Summary: Type Total Ran Passed Failed Inactive 00:17:50.897 suites 1 1 n/a 0 0 00:17:50.897 tests 23 23 23 0 0 00:17:50.897 asserts 152 152 152 0 n/a 00:17:50.897 00:17:50.897 Elapsed time = 1.215 seconds 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:51.156 rmmod nvme_tcp 00:17:51.156 rmmod nvme_fabrics 00:17:51.156 rmmod nvme_keyring 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 451823 ']' 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 451823 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@942 -- # '[' -z 
451823 ']' 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # kill -0 451823 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@947 -- # uname 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 451823 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # process_name=reactor_3 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' reactor_3 = sudo ']' 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@960 -- # echo 'killing process with pid 451823' 00:17:51.156 killing process with pid 451823 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@961 -- # kill 451823 00:17:51.156 23:54:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # wait 451823 00:17:51.416 23:54:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:51.416 23:54:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:51.416 23:54:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:51.416 23:54:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:51.416 23:54:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:51.416 23:54:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.416 23:54:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.416 23:54:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.978 23:54:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:53.978 00:17:53.978 real 0m12.648s 00:17:53.978 user 0m12.759s 00:17:53.978 sys 0m6.524s 00:17:53.978 23:54:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1118 -- # xtrace_disable 00:17:53.978 23:54:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:53.978 ************************************ 00:17:53.978 END TEST nvmf_bdevio 00:17:53.978 ************************************ 00:17:53.978 23:54:08 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:17:53.978 23:54:08 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:53.978 23:54:08 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:17:53.978 23:54:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:17:53.978 23:54:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:53.978 ************************************ 00:17:53.978 START TEST nvmf_auth_target 00:17:53.978 ************************************ 00:17:53.978 23:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:53.978 * Looking for test storage... 
00:17:53.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:53.978 23:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:53.978 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:53.978 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.978 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.978 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.978 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.978 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:53.979 23:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:02.135 23:54:16 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:02.135 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:02.135 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:02.135 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:18:02.136 Found net devices under 0000:31:00.0: cvl_0_0 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:02.136 Found net devices under 0000:31:00.1: cvl_0_1 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:02.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:02.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:18:02.136 00:18:02.136 --- 10.0.0.2 ping statistics --- 00:18:02.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.136 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:02.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:02.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:18:02.136 00:18:02.136 --- 10.0.0.1 ping statistics --- 00:18:02.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.136 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:02.136 23:54:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:02.136 23:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:02.136 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:02.136 23:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:02.136 23:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.136 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=457319 00:18:02.136 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 457319 00:18:02.136 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:02.136 23:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 457319 ']' 00:18:02.136 23:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.136 23:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:02.136 23:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
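nvmfappstart launches nvmf_tgt in the background (here inside the target namespace, with -L nvmf_auth debug logging enabled) and then blocks in waitforlisten until the application is ready on /var/tmp/spdk.sock. A minimal sketch of such a readiness loop, purely illustrative and not the actual waitforlisten implementation from autotest_common.sh:

pid=$nvmfpid
rpc_sock=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do
  kill -0 "$pid" 2>/dev/null || { echo "target exited before listening on $rpc_sock"; exit 1; }
  # consider the target ready once the RPC socket answers a harmless query
  scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null && break
  sleep 0.1
done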
00:18:02.136 23:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:02.136 23:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.707 23:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:02.707 23:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:18:02.707 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:02.707 23:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:02.707 23:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.707 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.707 23:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=457664 00:18:02.707 23:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:02.707 23:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:02.707 23:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:02.707 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:02.707 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:02.707 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:02.707 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:02.707 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:02.707 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=274e673c206a62402a0276212559101648ac9ef90eeb4c65 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.5kv 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 274e673c206a62402a0276212559101648ac9ef90eeb4c65 0 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 274e673c206a62402a0276212559101648ac9ef90eeb4c65 0 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=274e673c206a62402a0276212559101648ac9ef90eeb4c65 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.5kv 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.5kv 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.5kv 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=81f048027b9329771c0e19d368f272c561eaac0c18824faeea9f358e0d2e896e 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.QLq 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 81f048027b9329771c0e19d368f272c561eaac0c18824faeea9f358e0d2e896e 3 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 81f048027b9329771c0e19d368f272c561eaac0c18824faeea9f358e0d2e896e 3 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=81f048027b9329771c0e19d368f272c561eaac0c18824faeea9f358e0d2e896e 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:02.969 23:54:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.QLq 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.QLq 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.QLq 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=25d72513b6c862cc19d014279ae0189e 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.fGp 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 25d72513b6c862cc19d014279ae0189e 1 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 25d72513b6c862cc19d014279ae0189e 1 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=25d72513b6c862cc19d014279ae0189e 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.fGp 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.fGp 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.fGp 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e92b0219bfdec45a387cb46c086a9d6efcf03f0b3bfec5d8 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.vWc 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e92b0219bfdec45a387cb46c086a9d6efcf03f0b3bfec5d8 2 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e92b0219bfdec45a387cb46c086a9d6efcf03f0b3bfec5d8 2 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e92b0219bfdec45a387cb46c086a9d6efcf03f0b3bfec5d8 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.vWc 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.vWc 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.vWc 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=22964afcbda85c44373aa26f3fd63d8c8fd565fb25f65a1e 00:18:02.969 
23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Sz0 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 22964afcbda85c44373aa26f3fd63d8c8fd565fb25f65a1e 2 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 22964afcbda85c44373aa26f3fd63d8c8fd565fb25f65a1e 2 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=22964afcbda85c44373aa26f3fd63d8c8fd565fb25f65a1e 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:02.969 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Sz0 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Sz0 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Sz0 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5956e8ae5bf3e9fe26c418bf232146af 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.9ze 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5956e8ae5bf3e9fe26c418bf232146af 1 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5956e8ae5bf3e9fe26c418bf232146af 1 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5956e8ae5bf3e9fe26c418bf232146af 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.9ze 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.9ze 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.9ze 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=41c43cb61e8cb0fe440f88ad4bcfe710ee437909d16388269484ac6e6a80a891 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.cmI 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 41c43cb61e8cb0fe440f88ad4bcfe710ee437909d16388269484ac6e6a80a891 3 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 41c43cb61e8cb0fe440f88ad4bcfe710ee437909d16388269484ac6e6a80a891 3 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=41c43cb61e8cb0fe440f88ad4bcfe710ee437909d16388269484ac6e6a80a891 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.cmI 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.cmI 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.cmI 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 457319 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 457319 ']' 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
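The block above finishes generating the DH-HMAC-CHAP key material used for the rest of the test: for each entry, gen_dhchap_key draws len/2 random bytes, wraps them into the "DHHC-1:<digest-id>:<base64>:" secret form, and stores the result in a mode-0600 temp file whose path is kept in the keys[]/ckeys[] arrays. A minimal standalone sketch of that step, reconstructed from the trace (the real logic lives in nvmf/common.sh; the inline python that does the DHHC-1 wrapping is not shown in the trace and is only summarized in a comment):

    # hypothetical standalone rendering of: gen_dhchap_key <digest> <len>
    digest=sha256; len=32                              # hex length requested by the caller
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # e.g. 25d72513b6c862cc19d014279ae0189e
    file=$(mktemp -t "spdk.key-${digest}.XXX")         # e.g. /tmp/spdk.key-sha256.fGp
    # format_dhchap_key then emits the secret as DHHC-1:<digest-id>:<base64>:
    # where digest-id 0..3 maps to null/sha256/sha384/sha512, matching the
    # DHHC-1:00/01/02/03 prefixes visible in the nvme connect commands later in this log
    chmod 0600 "$file"
    echo "$file"                                       # recorded in keys[i] / ckeys[i]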
00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:03.230 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.490 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:03.490 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:18:03.490 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 457664 /var/tmp/host.sock 00:18:03.490 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 457664 ']' 00:18:03.490 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/host.sock 00:18:03.490 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:03.490 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:03.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:03.490 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:03.490 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.490 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:03.490 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:18:03.490 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:03.490 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:03.490 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.490 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:03.490 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:03.490 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5kv 00:18:03.490 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:03.490 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.490 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:03.490 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.5kv 00:18:03.490 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.5kv 00:18:03.750 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.QLq ]] 00:18:03.750 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QLq 00:18:03.750 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:03.750 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.750 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:03.750 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QLq 00:18:03.750 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QLq 00:18:04.009 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:04.009 23:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.fGp 00:18:04.009 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:04.009 23:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.009 23:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:04.009 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.fGp 00:18:04.009 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.fGp 00:18:04.009 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.vWc ]] 00:18:04.009 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vWc 00:18:04.009 23:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:04.009 23:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.009 23:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:04.009 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vWc 00:18:04.009 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vWc 00:18:04.268 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:04.268 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Sz0 00:18:04.268 23:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:04.268 23:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.268 23:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:04.268 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Sz0 00:18:04.268 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Sz0 00:18:04.528 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.9ze ]] 00:18:04.528 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9ze 00:18:04.528 23:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:04.528 23:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.528 23:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:04.528 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9ze 00:18:04.528 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.9ze 00:18:04.528 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:04.528 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.cmI 00:18:04.528 23:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:04.528 23:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.528 23:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:04.528 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.cmI 00:18:04.528 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.cmI 00:18:04.788 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:04.788 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:04.788 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:04.788 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.788 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:04.788 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:05.048 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:05.048 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.048 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:05.048 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:05.048 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:05.048 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.048 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.048 23:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:05.048 23:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.048 23:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:05.048 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.048 23:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.048 00:18:05.048 23:54:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.048 23:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.048 23:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.308 23:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.308 23:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.308 23:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:05.308 23:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.308 23:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:05.308 23:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.308 { 00:18:05.308 "cntlid": 1, 00:18:05.308 "qid": 0, 00:18:05.308 "state": "enabled", 00:18:05.308 "thread": "nvmf_tgt_poll_group_000", 00:18:05.308 "listen_address": { 00:18:05.308 "trtype": "TCP", 00:18:05.308 "adrfam": "IPv4", 00:18:05.308 "traddr": "10.0.0.2", 00:18:05.308 "trsvcid": "4420" 00:18:05.308 }, 00:18:05.308 "peer_address": { 00:18:05.308 "trtype": "TCP", 00:18:05.308 "adrfam": "IPv4", 00:18:05.308 "traddr": "10.0.0.1", 00:18:05.308 "trsvcid": "37108" 00:18:05.308 }, 00:18:05.308 "auth": { 00:18:05.308 "state": "completed", 00:18:05.308 "digest": "sha256", 00:18:05.308 "dhgroup": "null" 00:18:05.308 } 00:18:05.308 } 00:18:05.308 ]' 00:18:05.308 23:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.308 23:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.308 23:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.567 23:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:05.567 23:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.567 23:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.567 23:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.567 23:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.567 23:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:Mjc0ZTY3M2MyMDZhNjI0MDJhMDI3NjIxMjU1OTEwMTY0OGFjOWVmOTBlZWI0YzY1Q0Sqzg==: --dhchap-ctrl-secret DHHC-1:03:ODFmMDQ4MDI3YjkzMjk3NzFjMGUxOWQzNjhmMjcyYzU2MWVhYWMwYzE4ODI0ZmFlZWE5ZjM1OGUwZDJlODk2ZVyK/e4=: 00:18:06.504 23:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.504 23:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:06.504 23:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:06.504 23:54:21 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.504 23:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:06.504 23:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.504 23:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:06.504 23:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:06.504 23:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:06.504 23:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.504 23:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:06.504 23:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:06.504 23:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:06.504 23:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.504 23:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.504 23:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:06.504 23:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.504 23:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:06.504 23:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.505 23:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.762 00:18:06.762 23:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.762 23:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.762 23:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.020 23:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.020 23:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.020 23:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:07.020 23:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.020 23:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:07.020 23:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.020 { 00:18:07.020 "cntlid": 3, 00:18:07.020 "qid": 0, 00:18:07.020 
"state": "enabled", 00:18:07.020 "thread": "nvmf_tgt_poll_group_000", 00:18:07.020 "listen_address": { 00:18:07.020 "trtype": "TCP", 00:18:07.020 "adrfam": "IPv4", 00:18:07.020 "traddr": "10.0.0.2", 00:18:07.020 "trsvcid": "4420" 00:18:07.020 }, 00:18:07.020 "peer_address": { 00:18:07.020 "trtype": "TCP", 00:18:07.020 "adrfam": "IPv4", 00:18:07.020 "traddr": "10.0.0.1", 00:18:07.020 "trsvcid": "37140" 00:18:07.020 }, 00:18:07.020 "auth": { 00:18:07.020 "state": "completed", 00:18:07.020 "digest": "sha256", 00:18:07.020 "dhgroup": "null" 00:18:07.020 } 00:18:07.020 } 00:18:07.020 ]' 00:18:07.020 23:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.020 23:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:07.020 23:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.020 23:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:07.020 23:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.020 23:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.020 23:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.020 23:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.278 23:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MjVkNzI1MTNiNmM4NjJjYzE5ZDAxNDI3OWFlMDE4OWXgMoGK: --dhchap-ctrl-secret DHHC-1:02:ZTkyYjAyMTliZmRlYzQ1YTM4N2NiNDZjMDg2YTlkNmVmY2YwM2YwYjNiZmVjNWQ4pfzi3g==: 00:18:07.844 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.104 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:08.104 23:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:08.104 23:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.104 23:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:08.104 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.104 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:08.104 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:08.362 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:08.362 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.362 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:08.362 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:08.362 23:54:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:08.362 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.363 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.363 23:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:08.363 23:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.363 23:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:08.363 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.363 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.363 00:18:08.363 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.363 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.363 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.621 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.622 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.622 23:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:08.622 23:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.622 23:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:08.622 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.622 { 00:18:08.622 "cntlid": 5, 00:18:08.622 "qid": 0, 00:18:08.622 "state": "enabled", 00:18:08.622 "thread": "nvmf_tgt_poll_group_000", 00:18:08.622 "listen_address": { 00:18:08.622 "trtype": "TCP", 00:18:08.622 "adrfam": "IPv4", 00:18:08.622 "traddr": "10.0.0.2", 00:18:08.622 "trsvcid": "4420" 00:18:08.622 }, 00:18:08.622 "peer_address": { 00:18:08.622 "trtype": "TCP", 00:18:08.622 "adrfam": "IPv4", 00:18:08.622 "traddr": "10.0.0.1", 00:18:08.622 "trsvcid": "52622" 00:18:08.622 }, 00:18:08.622 "auth": { 00:18:08.622 "state": "completed", 00:18:08.622 "digest": "sha256", 00:18:08.622 "dhgroup": "null" 00:18:08.622 } 00:18:08.622 } 00:18:08.622 ]' 00:18:08.622 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.622 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.622 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.881 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:08.881 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:18:08.881 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.881 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.881 23:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.881 23:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MjI5NjRhZmNiZGE4NWM0NDM3M2FhMjZmM2ZkNjNkOGM4ZmQ1NjVmYjI1ZjY1YTFlxsgRCQ==: --dhchap-ctrl-secret DHHC-1:01:NTk1NmU4YWU1YmYzZTlmZTI2YzQxOGJmMjMyMTQ2YWbsuYLn: 00:18:09.816 23:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.816 23:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:09.816 23:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:09.816 23:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.816 23:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:09.816 23:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.816 23:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:09.816 23:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:09.816 23:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:09.816 23:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.816 23:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:09.816 23:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:09.816 23:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:09.816 23:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.816 23:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:09.816 23:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:09.816 23:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.816 23:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:09.816 23:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:09.816 23:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:10.075 00:18:10.075 23:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.075 23:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.075 23:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.335 23:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.335 23:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.335 23:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:10.335 23:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.335 23:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:10.335 23:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.335 { 00:18:10.335 "cntlid": 7, 00:18:10.335 "qid": 0, 00:18:10.335 "state": "enabled", 00:18:10.335 "thread": "nvmf_tgt_poll_group_000", 00:18:10.335 "listen_address": { 00:18:10.335 "trtype": "TCP", 00:18:10.335 "adrfam": "IPv4", 00:18:10.335 "traddr": "10.0.0.2", 00:18:10.335 "trsvcid": "4420" 00:18:10.335 }, 00:18:10.335 "peer_address": { 00:18:10.335 "trtype": "TCP", 00:18:10.335 "adrfam": "IPv4", 00:18:10.335 "traddr": "10.0.0.1", 00:18:10.335 "trsvcid": "52652" 00:18:10.335 }, 00:18:10.335 "auth": { 00:18:10.335 "state": "completed", 00:18:10.335 "digest": "sha256", 00:18:10.335 "dhgroup": "null" 00:18:10.335 } 00:18:10.335 } 00:18:10.335 ]' 00:18:10.335 23:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.335 23:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.335 23:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.335 23:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:10.335 23:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.335 23:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.335 23:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.335 23:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.594 23:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NDFjNDNjYjYxZThjYjBmZTQ0MGY4OGFkNGJjZmU3MTBlZTQzNzkwOWQxNjM4ODI2OTQ4NGFjNmU2YTgwYTg5MYax0PQ=: 00:18:11.162 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.162 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:11.162 23:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:11.162 23:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.162 23:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:11.162 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.162 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.162 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:11.162 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:11.422 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:11.422 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.422 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:11.422 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:11.422 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:11.422 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.422 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.422 23:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:11.422 23:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.422 23:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:11.422 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.422 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.681 00:18:11.682 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.682 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.682 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.941 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.941 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.941 23:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 
-- # xtrace_disable 00:18:11.941 23:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.941 23:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:11.941 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.941 { 00:18:11.941 "cntlid": 9, 00:18:11.941 "qid": 0, 00:18:11.941 "state": "enabled", 00:18:11.941 "thread": "nvmf_tgt_poll_group_000", 00:18:11.941 "listen_address": { 00:18:11.941 "trtype": "TCP", 00:18:11.941 "adrfam": "IPv4", 00:18:11.941 "traddr": "10.0.0.2", 00:18:11.941 "trsvcid": "4420" 00:18:11.941 }, 00:18:11.941 "peer_address": { 00:18:11.941 "trtype": "TCP", 00:18:11.941 "adrfam": "IPv4", 00:18:11.941 "traddr": "10.0.0.1", 00:18:11.941 "trsvcid": "52670" 00:18:11.941 }, 00:18:11.941 "auth": { 00:18:11.941 "state": "completed", 00:18:11.941 "digest": "sha256", 00:18:11.941 "dhgroup": "ffdhe2048" 00:18:11.941 } 00:18:11.941 } 00:18:11.941 ]' 00:18:11.941 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.941 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.941 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.941 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:11.941 23:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.941 23:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.941 23:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.941 23:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.200 23:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:Mjc0ZTY3M2MyMDZhNjI0MDJhMDI3NjIxMjU1OTEwMTY0OGFjOWVmOTBlZWI0YzY1Q0Sqzg==: --dhchap-ctrl-secret DHHC-1:03:ODFmMDQ4MDI3YjkzMjk3NzFjMGUxOWQzNjhmMjcyYzU2MWVhYWMwYzE4ODI0ZmFlZWE5ZjM1OGUwZDJlODk2ZVyK/e4=: 00:18:12.768 23:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.768 23:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:12.768 23:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:12.768 23:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.768 23:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:12.768 23:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.768 23:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:12.768 23:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:13.027 23:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:13.027 23:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.027 23:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:13.027 23:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:13.027 23:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:13.027 23:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.027 23:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.027 23:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:13.027 23:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.027 23:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:13.027 23:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.027 23:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.286 00:18:13.286 23:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.286 23:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.286 23:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.545 23:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.545 23:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.545 23:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:13.545 23:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.545 23:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:13.545 23:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.545 { 00:18:13.545 "cntlid": 11, 00:18:13.545 "qid": 0, 00:18:13.545 "state": "enabled", 00:18:13.545 "thread": "nvmf_tgt_poll_group_000", 00:18:13.545 "listen_address": { 00:18:13.545 "trtype": "TCP", 00:18:13.545 "adrfam": "IPv4", 00:18:13.545 "traddr": "10.0.0.2", 00:18:13.545 "trsvcid": "4420" 00:18:13.545 }, 00:18:13.545 "peer_address": { 00:18:13.545 "trtype": "TCP", 00:18:13.545 "adrfam": "IPv4", 00:18:13.545 "traddr": "10.0.0.1", 00:18:13.545 "trsvcid": "52696" 00:18:13.545 }, 00:18:13.545 "auth": { 00:18:13.545 "state": "completed", 00:18:13.545 "digest": "sha256", 00:18:13.545 "dhgroup": "ffdhe2048" 00:18:13.545 } 00:18:13.545 } 00:18:13.545 ]' 00:18:13.545 
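At this point connect_authenticate verifies the freshly attached controller: it confirms the controller name on the host side, pulls the subsystem's queue pairs on the target side, and asserts that the negotiated digest, DH group, and authentication state match what was just configured. A minimal sketch of that check, assuming the same RPC sockets used throughout this run (/var/tmp/host.sock for the host, the default /var/tmp/spdk.sock for the target) and rpc.py invoked from the SPDK checkout:

    # host side: the attached bdev controller should be nvme0
    name=$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]
    # target side: inspect the subsystem's qpairs and their auth block
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]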
23:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.545 23:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.545 23:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.545 23:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:13.545 23:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.545 23:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.545 23:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.545 23:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.810 23:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MjVkNzI1MTNiNmM4NjJjYzE5ZDAxNDI3OWFlMDE4OWXgMoGK: --dhchap-ctrl-secret DHHC-1:02:ZTkyYjAyMTliZmRlYzQ1YTM4N2NiNDZjMDg2YTlkNmVmY2YwM2YwYjNiZmVjNWQ4pfzi3g==: 00:18:14.448 23:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.448 23:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:14.448 23:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:14.448 23:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.448 23:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:14.448 23:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.448 23:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:14.448 23:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:14.708 23:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:14.708 23:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.708 23:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:14.708 23:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:14.708 23:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:14.708 23:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.708 23:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.708 23:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:14.708 23:54:29 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:14.708 23:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:14.708 23:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.708 23:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.708 00:18:14.967 23:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.967 23:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.967 23:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.967 23:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.967 23:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.967 23:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:14.967 23:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.967 23:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:14.967 23:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.967 { 00:18:14.967 "cntlid": 13, 00:18:14.967 "qid": 0, 00:18:14.967 "state": "enabled", 00:18:14.967 "thread": "nvmf_tgt_poll_group_000", 00:18:14.967 "listen_address": { 00:18:14.967 "trtype": "TCP", 00:18:14.967 "adrfam": "IPv4", 00:18:14.967 "traddr": "10.0.0.2", 00:18:14.967 "trsvcid": "4420" 00:18:14.967 }, 00:18:14.967 "peer_address": { 00:18:14.967 "trtype": "TCP", 00:18:14.967 "adrfam": "IPv4", 00:18:14.967 "traddr": "10.0.0.1", 00:18:14.967 "trsvcid": "52716" 00:18:14.967 }, 00:18:14.967 "auth": { 00:18:14.967 "state": "completed", 00:18:14.967 "digest": "sha256", 00:18:14.967 "dhgroup": "ffdhe2048" 00:18:14.967 } 00:18:14.967 } 00:18:14.967 ]' 00:18:14.967 23:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.967 23:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.967 23:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.226 23:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:15.226 23:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.226 23:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.226 23:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.226 23:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.226 23:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MjI5NjRhZmNiZGE4NWM0NDM3M2FhMjZmM2ZkNjNkOGM4ZmQ1NjVmYjI1ZjY1YTFlxsgRCQ==: --dhchap-ctrl-secret DHHC-1:01:NTk1NmU4YWU1YmYzZTlmZTI2YzQxOGJmMjMyMTQ2YWbsuYLn: 00:18:16.162 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.162 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:16.162 23:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:16.162 23:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.162 23:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:16.162 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.162 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:16.162 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:16.162 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:16.162 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.162 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:16.162 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:16.162 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:16.162 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.162 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:16.162 23:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:16.162 23:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.162 23:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:16.162 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:16.162 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:16.421 00:18:16.422 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.422 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.422 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.680 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.680 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.680 23:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:16.680 23:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.680 23:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:16.680 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.680 { 00:18:16.680 "cntlid": 15, 00:18:16.680 "qid": 0, 00:18:16.680 "state": "enabled", 00:18:16.680 "thread": "nvmf_tgt_poll_group_000", 00:18:16.680 "listen_address": { 00:18:16.680 "trtype": "TCP", 00:18:16.680 "adrfam": "IPv4", 00:18:16.680 "traddr": "10.0.0.2", 00:18:16.680 "trsvcid": "4420" 00:18:16.680 }, 00:18:16.680 "peer_address": { 00:18:16.680 "trtype": "TCP", 00:18:16.680 "adrfam": "IPv4", 00:18:16.680 "traddr": "10.0.0.1", 00:18:16.680 "trsvcid": "52734" 00:18:16.680 }, 00:18:16.680 "auth": { 00:18:16.680 "state": "completed", 00:18:16.680 "digest": "sha256", 00:18:16.680 "dhgroup": "ffdhe2048" 00:18:16.680 } 00:18:16.680 } 00:18:16.680 ]' 00:18:16.680 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.680 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.680 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.680 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:16.680 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.680 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.680 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.680 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.939 23:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NDFjNDNjYjYxZThjYjBmZTQ0MGY4OGFkNGJjZmU3MTBlZTQzNzkwOWQxNjM4ODI2OTQ4NGFjNmU2YTgwYTg5MYax0PQ=: 00:18:17.508 23:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.508 23:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:17.508 23:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:17.508 23:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.508 23:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:17.508 23:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:17.508 23:54:32 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.508 23:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:17.508 23:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:17.767 23:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:17.767 23:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.767 23:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:17.767 23:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:17.767 23:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:17.767 23:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.767 23:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.767 23:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:17.767 23:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.767 23:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:17.767 23:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.767 23:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.026 00:18:18.026 23:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.026 23:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.026 23:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.283 23:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.283 23:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.283 23:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:18.283 23:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.283 23:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:18.283 23:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.283 { 00:18:18.283 "cntlid": 17, 00:18:18.283 "qid": 0, 00:18:18.283 "state": "enabled", 00:18:18.283 "thread": "nvmf_tgt_poll_group_000", 00:18:18.283 "listen_address": { 00:18:18.283 "trtype": "TCP", 00:18:18.283 "adrfam": "IPv4", 00:18:18.283 "traddr": 
"10.0.0.2", 00:18:18.283 "trsvcid": "4420" 00:18:18.283 }, 00:18:18.283 "peer_address": { 00:18:18.283 "trtype": "TCP", 00:18:18.283 "adrfam": "IPv4", 00:18:18.283 "traddr": "10.0.0.1", 00:18:18.283 "trsvcid": "52764" 00:18:18.283 }, 00:18:18.283 "auth": { 00:18:18.283 "state": "completed", 00:18:18.283 "digest": "sha256", 00:18:18.283 "dhgroup": "ffdhe3072" 00:18:18.283 } 00:18:18.283 } 00:18:18.283 ]' 00:18:18.283 23:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.283 23:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:18.283 23:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.283 23:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:18.283 23:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.283 23:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.283 23:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.283 23:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.540 23:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:Mjc0ZTY3M2MyMDZhNjI0MDJhMDI3NjIxMjU1OTEwMTY0OGFjOWVmOTBlZWI0YzY1Q0Sqzg==: --dhchap-ctrl-secret DHHC-1:03:ODFmMDQ4MDI3YjkzMjk3NzFjMGUxOWQzNjhmMjcyYzU2MWVhYWMwYzE4ODI0ZmFlZWE5ZjM1OGUwZDJlODk2ZVyK/e4=: 00:18:19.106 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.106 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:19.106 23:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:19.106 23:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.106 23:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:19.106 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.106 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:19.106 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:19.365 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:19.365 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.365 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:19.365 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:19.365 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:19.365 23:54:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.365 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.365 23:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:19.365 23:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.365 23:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:19.365 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.365 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.623 00:18:19.623 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.623 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.623 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.880 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.880 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.880 23:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:19.880 23:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.880 23:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:19.880 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.880 { 00:18:19.880 "cntlid": 19, 00:18:19.880 "qid": 0, 00:18:19.880 "state": "enabled", 00:18:19.880 "thread": "nvmf_tgt_poll_group_000", 00:18:19.880 "listen_address": { 00:18:19.880 "trtype": "TCP", 00:18:19.880 "adrfam": "IPv4", 00:18:19.880 "traddr": "10.0.0.2", 00:18:19.880 "trsvcid": "4420" 00:18:19.880 }, 00:18:19.880 "peer_address": { 00:18:19.880 "trtype": "TCP", 00:18:19.880 "adrfam": "IPv4", 00:18:19.880 "traddr": "10.0.0.1", 00:18:19.880 "trsvcid": "47584" 00:18:19.880 }, 00:18:19.880 "auth": { 00:18:19.880 "state": "completed", 00:18:19.880 "digest": "sha256", 00:18:19.880 "dhgroup": "ffdhe3072" 00:18:19.880 } 00:18:19.880 } 00:18:19.880 ]' 00:18:19.880 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.880 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:19.880 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.880 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:19.880 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.880 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.880 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.880 23:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.138 23:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MjVkNzI1MTNiNmM4NjJjYzE5ZDAxNDI3OWFlMDE4OWXgMoGK: --dhchap-ctrl-secret DHHC-1:02:ZTkyYjAyMTliZmRlYzQ1YTM4N2NiNDZjMDg2YTlkNmVmY2YwM2YwYjNiZmVjNWQ4pfzi3g==: 00:18:20.703 23:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.703 23:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:20.703 23:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:20.703 23:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.703 23:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:20.703 23:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.703 23:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:20.703 23:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:20.961 23:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:20.961 23:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.961 23:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:20.961 23:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:20.961 23:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:20.961 23:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.961 23:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.961 23:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:20.961 23:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.961 23:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:20.961 23:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.961 23:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.218 00:18:21.218 23:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.218 23:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.218 23:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.476 23:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.476 23:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.476 23:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:21.476 23:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.476 23:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:21.476 23:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.476 { 00:18:21.476 "cntlid": 21, 00:18:21.476 "qid": 0, 00:18:21.476 "state": "enabled", 00:18:21.476 "thread": "nvmf_tgt_poll_group_000", 00:18:21.476 "listen_address": { 00:18:21.476 "trtype": "TCP", 00:18:21.476 "adrfam": "IPv4", 00:18:21.476 "traddr": "10.0.0.2", 00:18:21.476 "trsvcid": "4420" 00:18:21.477 }, 00:18:21.477 "peer_address": { 00:18:21.477 "trtype": "TCP", 00:18:21.477 "adrfam": "IPv4", 00:18:21.477 "traddr": "10.0.0.1", 00:18:21.477 "trsvcid": "47600" 00:18:21.477 }, 00:18:21.477 "auth": { 00:18:21.477 "state": "completed", 00:18:21.477 "digest": "sha256", 00:18:21.477 "dhgroup": "ffdhe3072" 00:18:21.477 } 00:18:21.477 } 00:18:21.477 ]' 00:18:21.477 23:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.477 23:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:21.477 23:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.477 23:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:21.477 23:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.477 23:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.477 23:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.477 23:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.735 23:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MjI5NjRhZmNiZGE4NWM0NDM3M2FhMjZmM2ZkNjNkOGM4ZmQ1NjVmYjI1ZjY1YTFlxsgRCQ==: --dhchap-ctrl-secret DHHC-1:01:NTk1NmU4YWU1YmYzZTlmZTI2YzQxOGJmMjMyMTQ2YWbsuYLn: 00:18:22.301 23:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
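The pass that just finished above is one iteration of connect_authenticate: restrict the host to a single DH-HMAC-CHAP digest/dhgroup, register the host NQN on the subsystem with a key pair, attach a controller with the same keys, confirm the qpair negotiated the expected digest/dhgroup and reached the "completed" auth state, tear down, then repeat the attach through the kernel initiator with nvme connect. A minimal stand-alone sketch of that sequence, built only from commands visible in this trace, is shown below; it assumes the layout of this run (hostrpc in the trace is rpc.py against the host-side server on /var/tmp/host.sock, rpc_cmd is assumed to reach the target's default RPC socket, subsystem nqn.2024-03.io.spdk:cnode0 listens on 10.0.0.2:4420, and key2/ckey2 are the key names registered earlier in the run):

  # host side (hostrpc in the trace): only allow sha256 + ffdhe3072 for this pass
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  # target side (rpc_cmd in the trace): allow this host NQN with the key pair under test
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # host side: attach with the same keys, then verify the authenticated qpair on the target
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # repeat the attach via the kernel initiator, then clean up
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret "$key2_secret" --dhchap-ctrl-secret "$ckey2_secret"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

Here $hostnqn, $hostid, $key2_secret and $ckey2_secret stand in for the literal UUID-based host NQN and the DHHC-1 strings that appear inline in the log.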
00:18:22.301 23:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:22.301 23:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:22.301 23:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.301 23:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:22.301 23:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.301 23:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:22.301 23:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:22.559 23:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:22.559 23:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.559 23:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:22.559 23:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:22.559 23:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:22.559 23:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.559 23:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:22.559 23:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:22.559 23:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.559 23:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:22.559 23:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.559 23:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.817 00:18:22.817 23:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.817 23:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.817 23:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.817 23:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.818 23:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.818 23:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:22.818 23:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:22.818 23:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:23.081 23:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.081 { 00:18:23.081 "cntlid": 23, 00:18:23.081 "qid": 0, 00:18:23.081 "state": "enabled", 00:18:23.081 "thread": "nvmf_tgt_poll_group_000", 00:18:23.081 "listen_address": { 00:18:23.081 "trtype": "TCP", 00:18:23.081 "adrfam": "IPv4", 00:18:23.081 "traddr": "10.0.0.2", 00:18:23.081 "trsvcid": "4420" 00:18:23.081 }, 00:18:23.081 "peer_address": { 00:18:23.081 "trtype": "TCP", 00:18:23.081 "adrfam": "IPv4", 00:18:23.081 "traddr": "10.0.0.1", 00:18:23.081 "trsvcid": "47628" 00:18:23.081 }, 00:18:23.081 "auth": { 00:18:23.081 "state": "completed", 00:18:23.081 "digest": "sha256", 00:18:23.081 "dhgroup": "ffdhe3072" 00:18:23.081 } 00:18:23.081 } 00:18:23.081 ]' 00:18:23.081 23:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.081 23:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:23.081 23:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.081 23:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:23.081 23:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.081 23:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.081 23:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.081 23:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.339 23:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NDFjNDNjYjYxZThjYjBmZTQ0MGY4OGFkNGJjZmU3MTBlZTQzNzkwOWQxNjM4ODI2OTQ4NGFjNmU2YTgwYTg5MYax0PQ=: 00:18:23.906 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.906 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:23.906 23:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:23.906 23:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.906 23:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:23.906 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.906 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.906 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:23.906 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:24.164 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:18:24.164 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.164 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:24.164 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:24.164 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:24.164 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.164 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.164 23:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:24.164 23:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.164 23:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:24.164 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.164 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.423 00:18:24.423 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.423 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.423 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.681 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.681 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.681 23:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:24.681 23:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.681 23:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:24.681 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.681 { 00:18:24.681 "cntlid": 25, 00:18:24.681 "qid": 0, 00:18:24.681 "state": "enabled", 00:18:24.681 "thread": "nvmf_tgt_poll_group_000", 00:18:24.681 "listen_address": { 00:18:24.681 "trtype": "TCP", 00:18:24.681 "adrfam": "IPv4", 00:18:24.681 "traddr": "10.0.0.2", 00:18:24.681 "trsvcid": "4420" 00:18:24.681 }, 00:18:24.681 "peer_address": { 00:18:24.681 "trtype": "TCP", 00:18:24.681 "adrfam": "IPv4", 00:18:24.681 "traddr": "10.0.0.1", 00:18:24.681 "trsvcid": "47662" 00:18:24.681 }, 00:18:24.681 "auth": { 00:18:24.681 "state": "completed", 00:18:24.681 "digest": "sha256", 00:18:24.681 "dhgroup": "ffdhe4096" 00:18:24.681 } 00:18:24.681 } 00:18:24.681 ]' 00:18:24.681 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.681 23:54:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:24.681 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.681 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:24.681 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.681 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.681 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.681 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.940 23:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:Mjc0ZTY3M2MyMDZhNjI0MDJhMDI3NjIxMjU1OTEwMTY0OGFjOWVmOTBlZWI0YzY1Q0Sqzg==: --dhchap-ctrl-secret DHHC-1:03:ODFmMDQ4MDI3YjkzMjk3NzFjMGUxOWQzNjhmMjcyYzU2MWVhYWMwYzE4ODI0ZmFlZWE5ZjM1OGUwZDJlODk2ZVyK/e4=: 00:18:25.505 23:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.505 23:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:25.505 23:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:25.505 23:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.505 23:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:25.505 23:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.505 23:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:25.505 23:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:25.763 23:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:25.763 23:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.763 23:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:25.763 23:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:25.763 23:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:25.763 23:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.763 23:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.763 23:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:25.763 23:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.763 23:54:40 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:25.763 23:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.763 23:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.021 00:18:26.021 23:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.021 23:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.021 23:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.279 23:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.279 23:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.279 23:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:26.279 23:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.279 23:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:26.279 23:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.279 { 00:18:26.279 "cntlid": 27, 00:18:26.279 "qid": 0, 00:18:26.279 "state": "enabled", 00:18:26.279 "thread": "nvmf_tgt_poll_group_000", 00:18:26.279 "listen_address": { 00:18:26.279 "trtype": "TCP", 00:18:26.280 "adrfam": "IPv4", 00:18:26.280 "traddr": "10.0.0.2", 00:18:26.280 "trsvcid": "4420" 00:18:26.280 }, 00:18:26.280 "peer_address": { 00:18:26.280 "trtype": "TCP", 00:18:26.280 "adrfam": "IPv4", 00:18:26.280 "traddr": "10.0.0.1", 00:18:26.280 "trsvcid": "47680" 00:18:26.280 }, 00:18:26.280 "auth": { 00:18:26.280 "state": "completed", 00:18:26.280 "digest": "sha256", 00:18:26.280 "dhgroup": "ffdhe4096" 00:18:26.280 } 00:18:26.280 } 00:18:26.280 ]' 00:18:26.280 23:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.280 23:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:26.280 23:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.280 23:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:26.280 23:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.280 23:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.280 23:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.280 23:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.539 23:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MjVkNzI1MTNiNmM4NjJjYzE5ZDAxNDI3OWFlMDE4OWXgMoGK: --dhchap-ctrl-secret DHHC-1:02:ZTkyYjAyMTliZmRlYzQ1YTM4N2NiNDZjMDg2YTlkNmVmY2YwM2YwYjNiZmVjNWQ4pfzi3g==: 00:18:27.105 23:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.105 23:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:27.105 23:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:27.105 23:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.105 23:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:27.105 23:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.105 23:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:27.105 23:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:27.363 23:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:27.363 23:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.363 23:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:27.363 23:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:27.363 23:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:27.363 23:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.363 23:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.363 23:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:27.363 23:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.363 23:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:27.363 23:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.363 23:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.621 00:18:27.621 23:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.621 23:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.621 23:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.878 23:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.878 23:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.878 23:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:27.878 23:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.878 23:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:27.878 23:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.878 { 00:18:27.878 "cntlid": 29, 00:18:27.878 "qid": 0, 00:18:27.878 "state": "enabled", 00:18:27.878 "thread": "nvmf_tgt_poll_group_000", 00:18:27.878 "listen_address": { 00:18:27.878 "trtype": "TCP", 00:18:27.878 "adrfam": "IPv4", 00:18:27.878 "traddr": "10.0.0.2", 00:18:27.878 "trsvcid": "4420" 00:18:27.878 }, 00:18:27.878 "peer_address": { 00:18:27.878 "trtype": "TCP", 00:18:27.878 "adrfam": "IPv4", 00:18:27.878 "traddr": "10.0.0.1", 00:18:27.878 "trsvcid": "47710" 00:18:27.878 }, 00:18:27.878 "auth": { 00:18:27.878 "state": "completed", 00:18:27.878 "digest": "sha256", 00:18:27.878 "dhgroup": "ffdhe4096" 00:18:27.878 } 00:18:27.878 } 00:18:27.878 ]' 00:18:27.878 23:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.878 23:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.878 23:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.878 23:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:27.878 23:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.878 23:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.878 23:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.878 23:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.138 23:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MjI5NjRhZmNiZGE4NWM0NDM3M2FhMjZmM2ZkNjNkOGM4ZmQ1NjVmYjI1ZjY1YTFlxsgRCQ==: --dhchap-ctrl-secret DHHC-1:01:NTk1NmU4YWU1YmYzZTlmZTI2YzQxOGJmMjMyMTQ2YWbsuYLn: 00:18:28.705 23:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.705 23:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:28.705 23:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:28.705 23:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.705 23:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 
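The --dhchap-secret / --dhchap-ctrl-secret strings passed to nvme connect throughout this run use the NVMe in-band authentication secret representation "DHHC-1:<t>:<base64>:", where <t> selects the transformation applied to the secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the secret with a 4-byte CRC-32 appended. The lengths in this log line up with that reading: the 01, 02 and 03 secrets decode to 36, 52 and 68 bytes respectively, i.e. 32-, 48- and 64-byte secrets plus the CRC. This can be checked with nothing more than base64 and wc; purely illustrative, run against the key2 secret quoted above:

  echo 'MjI5NjRhZmNiZGE4NWM0NDM3M2FhMjZmM2ZkNjNkOGM4ZmQ1NjVmYjI1ZjY1YTFlxsgRCQ==' | base64 -d | wc -c   # prints 52: 48-byte secret + 4-byte CRC-32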
00:18:28.705 23:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.705 23:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:28.705 23:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:28.965 23:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:28.965 23:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.965 23:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:28.965 23:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:28.965 23:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:28.965 23:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.965 23:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:28.965 23:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:28.965 23:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.965 23:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:28.965 23:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.965 23:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:29.225 00:18:29.225 23:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.225 23:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.225 23:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.486 23:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.486 23:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.486 23:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:29.486 23:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.486 23:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:29.486 23:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.486 { 00:18:29.486 "cntlid": 31, 00:18:29.486 "qid": 0, 00:18:29.486 "state": "enabled", 00:18:29.486 "thread": "nvmf_tgt_poll_group_000", 00:18:29.486 "listen_address": { 00:18:29.486 "trtype": "TCP", 00:18:29.486 "adrfam": "IPv4", 00:18:29.486 "traddr": "10.0.0.2", 00:18:29.486 "trsvcid": 
"4420" 00:18:29.486 }, 00:18:29.486 "peer_address": { 00:18:29.486 "trtype": "TCP", 00:18:29.486 "adrfam": "IPv4", 00:18:29.486 "traddr": "10.0.0.1", 00:18:29.486 "trsvcid": "58928" 00:18:29.486 }, 00:18:29.486 "auth": { 00:18:29.486 "state": "completed", 00:18:29.486 "digest": "sha256", 00:18:29.486 "dhgroup": "ffdhe4096" 00:18:29.486 } 00:18:29.486 } 00:18:29.486 ]' 00:18:29.486 23:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.486 23:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:29.486 23:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.486 23:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:29.486 23:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.486 23:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.486 23:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.486 23:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.746 23:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NDFjNDNjYjYxZThjYjBmZTQ0MGY4OGFkNGJjZmU3MTBlZTQzNzkwOWQxNjM4ODI2OTQ4NGFjNmU2YTgwYTg5MYax0PQ=: 00:18:30.314 23:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.314 23:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:30.314 23:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:30.314 23:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.314 23:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:30.314 23:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:30.314 23:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.314 23:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:30.314 23:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:30.574 23:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:30.574 23:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.574 23:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:30.574 23:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:30.574 23:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:30.574 23:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.574 23:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.574 23:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:30.574 23:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.574 23:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:30.574 23:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.574 23:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.832 00:18:30.832 23:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.832 23:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.832 23:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.090 23:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.090 23:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.090 23:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:31.090 23:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.090 23:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:31.090 23:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.090 { 00:18:31.090 "cntlid": 33, 00:18:31.090 "qid": 0, 00:18:31.090 "state": "enabled", 00:18:31.090 "thread": "nvmf_tgt_poll_group_000", 00:18:31.090 "listen_address": { 00:18:31.090 "trtype": "TCP", 00:18:31.090 "adrfam": "IPv4", 00:18:31.090 "traddr": "10.0.0.2", 00:18:31.090 "trsvcid": "4420" 00:18:31.090 }, 00:18:31.090 "peer_address": { 00:18:31.090 "trtype": "TCP", 00:18:31.090 "adrfam": "IPv4", 00:18:31.090 "traddr": "10.0.0.1", 00:18:31.090 "trsvcid": "58958" 00:18:31.090 }, 00:18:31.090 "auth": { 00:18:31.090 "state": "completed", 00:18:31.090 "digest": "sha256", 00:18:31.090 "dhgroup": "ffdhe6144" 00:18:31.090 } 00:18:31.090 } 00:18:31.090 ]' 00:18:31.090 23:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.090 23:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:31.090 23:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.090 23:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:31.090 23:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.375 23:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:18:31.375 23:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.375 23:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.375 23:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:Mjc0ZTY3M2MyMDZhNjI0MDJhMDI3NjIxMjU1OTEwMTY0OGFjOWVmOTBlZWI0YzY1Q0Sqzg==: --dhchap-ctrl-secret DHHC-1:03:ODFmMDQ4MDI3YjkzMjk3NzFjMGUxOWQzNjhmMjcyYzU2MWVhYWMwYzE4ODI0ZmFlZWE5ZjM1OGUwZDJlODk2ZVyK/e4=: 00:18:32.309 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.309 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:32.309 23:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:32.309 23:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.309 23:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:32.309 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.309 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:32.309 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:32.309 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:32.309 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.309 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:32.309 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:32.309 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:32.309 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.309 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.309 23:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:32.309 23:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.309 23:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:32.309 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.309 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.568 00:18:32.568 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.568 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.568 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.826 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.826 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.826 23:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:32.826 23:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.826 23:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:32.826 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.826 { 00:18:32.826 "cntlid": 35, 00:18:32.826 "qid": 0, 00:18:32.826 "state": "enabled", 00:18:32.826 "thread": "nvmf_tgt_poll_group_000", 00:18:32.826 "listen_address": { 00:18:32.826 "trtype": "TCP", 00:18:32.826 "adrfam": "IPv4", 00:18:32.826 "traddr": "10.0.0.2", 00:18:32.826 "trsvcid": "4420" 00:18:32.826 }, 00:18:32.826 "peer_address": { 00:18:32.826 "trtype": "TCP", 00:18:32.826 "adrfam": "IPv4", 00:18:32.826 "traddr": "10.0.0.1", 00:18:32.826 "trsvcid": "58994" 00:18:32.826 }, 00:18:32.826 "auth": { 00:18:32.826 "state": "completed", 00:18:32.826 "digest": "sha256", 00:18:32.826 "dhgroup": "ffdhe6144" 00:18:32.826 } 00:18:32.826 } 00:18:32.826 ]' 00:18:32.826 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.826 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:32.826 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.826 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:32.826 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.826 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.826 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.826 23:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.084 23:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MjVkNzI1MTNiNmM4NjJjYzE5ZDAxNDI3OWFlMDE4OWXgMoGK: --dhchap-ctrl-secret DHHC-1:02:ZTkyYjAyMTliZmRlYzQ1YTM4N2NiNDZjMDg2YTlkNmVmY2YwM2YwYjNiZmVjNWQ4pfzi3g==: 00:18:33.649 23:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
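For reference, the host-side step exercised just above reduces to the following bash sketch. It mirrors the nvme-cli invocation recorded in this run; $hostnqn and $hostid stand for the uuid-based host NQN and host ID used throughout the log, and the DHHC-1 strings are placeholders rather than the literal secrets.

  # Connect to the SPDK target over TCP, authenticating with DH-HMAC-CHAP.
  # --dhchap-secret carries the host key; --dhchap-ctrl-secret carries the
  # controller key used for bidirectional authentication.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret 'DHHC-1:00:<host key>' \
      --dhchap-ctrl-secret 'DHHC-1:03:<controller key>'

  # Drop the session again once the authenticated connection has been verified.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0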
00:18:33.908 23:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:33.908 23:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:33.908 23:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.908 23:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:33.908 23:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.908 23:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:33.908 23:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:33.908 23:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:33.908 23:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.908 23:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:33.908 23:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:33.908 23:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:33.908 23:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.908 23:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.908 23:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:33.908 23:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.908 23:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:33.908 23:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.908 23:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.166 00:18:34.427 23:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.427 23:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.427 23:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.427 23:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.427 23:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.427 23:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 
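Each key iteration traced in this log follows the same RPC sequence. A condensed sketch is below; it assumes, as in the script itself, that rpc_cmd talks to the target while hostrpc wraps rpc.py against /var/tmp/host.sock, with $hostnqn standing for the uuid-based host NQN and key2/ckey2 for the pre-loaded key names. The ${ckeys[$3]:+...} expansion seen above is what makes the controller key optional: when no controller key exists for a key id (key3 in this run), the --dhchap-ctrlr-key argument is simply omitted.

  # Limit the host-side bdev_nvme layer to the digest and DH group under test.
  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

  # Register the host on the subsystem with its DH-CHAP key (and controller key,
  # when one is defined for this key id).
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Attach a controller from the host side using the matching keys.
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Confirm the negotiated parameters on the resulting qpair, then clean up.
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  hostrpc bdev_nvme_detach_controller nvme0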
00:18:34.427 23:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.427 23:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:34.427 23:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.427 { 00:18:34.427 "cntlid": 37, 00:18:34.427 "qid": 0, 00:18:34.427 "state": "enabled", 00:18:34.427 "thread": "nvmf_tgt_poll_group_000", 00:18:34.427 "listen_address": { 00:18:34.427 "trtype": "TCP", 00:18:34.427 "adrfam": "IPv4", 00:18:34.427 "traddr": "10.0.0.2", 00:18:34.427 "trsvcid": "4420" 00:18:34.427 }, 00:18:34.427 "peer_address": { 00:18:34.427 "trtype": "TCP", 00:18:34.427 "adrfam": "IPv4", 00:18:34.427 "traddr": "10.0.0.1", 00:18:34.427 "trsvcid": "59022" 00:18:34.427 }, 00:18:34.427 "auth": { 00:18:34.427 "state": "completed", 00:18:34.427 "digest": "sha256", 00:18:34.427 "dhgroup": "ffdhe6144" 00:18:34.427 } 00:18:34.427 } 00:18:34.427 ]' 00:18:34.427 23:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.427 23:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:34.427 23:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.685 23:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:34.685 23:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.685 23:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.685 23:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.685 23:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.685 23:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MjI5NjRhZmNiZGE4NWM0NDM3M2FhMjZmM2ZkNjNkOGM4ZmQ1NjVmYjI1ZjY1YTFlxsgRCQ==: --dhchap-ctrl-secret DHHC-1:01:NTk1NmU4YWU1YmYzZTlmZTI2YzQxOGJmMjMyMTQ2YWbsuYLn: 00:18:35.641 23:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.641 23:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:35.641 23:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:35.641 23:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.641 23:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:35.641 23:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.641 23:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:35.641 23:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:35.641 23:54:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:18:35.641 23:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.641 23:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:35.641 23:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:35.641 23:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:35.641 23:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.641 23:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:35.641 23:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:35.641 23:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.641 23:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:35.641 23:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.641 23:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.899 00:18:35.899 23:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.899 23:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.899 23:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.157 23:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.157 23:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.157 23:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:36.157 23:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.157 23:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:36.157 23:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.157 { 00:18:36.157 "cntlid": 39, 00:18:36.157 "qid": 0, 00:18:36.157 "state": "enabled", 00:18:36.157 "thread": "nvmf_tgt_poll_group_000", 00:18:36.157 "listen_address": { 00:18:36.157 "trtype": "TCP", 00:18:36.157 "adrfam": "IPv4", 00:18:36.157 "traddr": "10.0.0.2", 00:18:36.157 "trsvcid": "4420" 00:18:36.157 }, 00:18:36.157 "peer_address": { 00:18:36.157 "trtype": "TCP", 00:18:36.157 "adrfam": "IPv4", 00:18:36.157 "traddr": "10.0.0.1", 00:18:36.157 "trsvcid": "59046" 00:18:36.157 }, 00:18:36.157 "auth": { 00:18:36.157 "state": "completed", 00:18:36.157 "digest": "sha256", 00:18:36.157 "dhgroup": "ffdhe6144" 00:18:36.157 } 00:18:36.157 } 00:18:36.157 ]' 00:18:36.157 23:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.157 23:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:18:36.157 23:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.157 23:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:36.157 23:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.157 23:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.157 23:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.157 23:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.416 23:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NDFjNDNjYjYxZThjYjBmZTQ0MGY4OGFkNGJjZmU3MTBlZTQzNzkwOWQxNjM4ODI2OTQ4NGFjNmU2YTgwYTg5MYax0PQ=: 00:18:36.983 23:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.983 23:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:36.983 23:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:36.983 23:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.983 23:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:36.983 23:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:36.983 23:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.983 23:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:36.983 23:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:37.242 23:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:37.242 23:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.242 23:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:37.242 23:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:37.242 23:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:37.242 23:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.242 23:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.242 23:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:37.242 23:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.242 23:54:52 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:37.242 23:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.242 23:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.809 00:18:37.809 23:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.809 23:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.809 23:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.809 23:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.809 23:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.809 23:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:37.809 23:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.809 23:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:37.809 23:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.809 { 00:18:37.809 "cntlid": 41, 00:18:37.809 "qid": 0, 00:18:37.809 "state": "enabled", 00:18:37.809 "thread": "nvmf_tgt_poll_group_000", 00:18:37.809 "listen_address": { 00:18:37.809 "trtype": "TCP", 00:18:37.809 "adrfam": "IPv4", 00:18:37.809 "traddr": "10.0.0.2", 00:18:37.809 "trsvcid": "4420" 00:18:37.809 }, 00:18:37.809 "peer_address": { 00:18:37.809 "trtype": "TCP", 00:18:37.809 "adrfam": "IPv4", 00:18:37.809 "traddr": "10.0.0.1", 00:18:37.809 "trsvcid": "59060" 00:18:37.809 }, 00:18:37.809 "auth": { 00:18:37.809 "state": "completed", 00:18:37.809 "digest": "sha256", 00:18:37.809 "dhgroup": "ffdhe8192" 00:18:37.809 } 00:18:37.809 } 00:18:37.809 ]' 00:18:37.809 23:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.067 23:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:38.067 23:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.067 23:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:38.067 23:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.067 23:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.067 23:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.067 23:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.325 23:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:Mjc0ZTY3M2MyMDZhNjI0MDJhMDI3NjIxMjU1OTEwMTY0OGFjOWVmOTBlZWI0YzY1Q0Sqzg==: --dhchap-ctrl-secret DHHC-1:03:ODFmMDQ4MDI3YjkzMjk3NzFjMGUxOWQzNjhmMjcyYzU2MWVhYWMwYzE4ODI0ZmFlZWE5ZjM1OGUwZDJlODk2ZVyK/e4=: 00:18:38.892 23:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.892 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:38.892 23:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:38.892 23:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.892 23:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:38.892 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.892 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:38.892 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:39.151 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:39.151 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.151 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:39.151 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:39.151 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:39.151 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.151 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.151 23:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:39.151 23:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.151 23:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:39.151 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.151 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.720 00:18:39.720 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.720 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.720 23:54:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.720 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.720 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.720 23:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:39.720 23:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.720 23:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:39.720 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.720 { 00:18:39.720 "cntlid": 43, 00:18:39.720 "qid": 0, 00:18:39.720 "state": "enabled", 00:18:39.720 "thread": "nvmf_tgt_poll_group_000", 00:18:39.720 "listen_address": { 00:18:39.720 "trtype": "TCP", 00:18:39.720 "adrfam": "IPv4", 00:18:39.720 "traddr": "10.0.0.2", 00:18:39.720 "trsvcid": "4420" 00:18:39.720 }, 00:18:39.720 "peer_address": { 00:18:39.720 "trtype": "TCP", 00:18:39.720 "adrfam": "IPv4", 00:18:39.720 "traddr": "10.0.0.1", 00:18:39.720 "trsvcid": "38698" 00:18:39.720 }, 00:18:39.720 "auth": { 00:18:39.720 "state": "completed", 00:18:39.720 "digest": "sha256", 00:18:39.720 "dhgroup": "ffdhe8192" 00:18:39.720 } 00:18:39.720 } 00:18:39.720 ]' 00:18:39.720 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.720 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:39.720 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.979 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:39.979 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.979 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.979 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.979 23:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.979 23:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MjVkNzI1MTNiNmM4NjJjYzE5ZDAxNDI3OWFlMDE4OWXgMoGK: --dhchap-ctrl-secret DHHC-1:02:ZTkyYjAyMTliZmRlYzQ1YTM4N2NiNDZjMDg2YTlkNmVmY2YwM2YwYjNiZmVjNWQ4pfzi3g==: 00:18:40.957 23:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.957 23:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:40.957 23:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:40.957 23:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.957 23:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:40.957 23:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:18:40.957 23:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:40.957 23:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:40.957 23:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:40.957 23:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.957 23:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:40.957 23:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:40.957 23:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:40.957 23:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.957 23:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.957 23:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:40.957 23:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.957 23:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:40.957 23:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.957 23:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.574 00:18:41.574 23:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.574 23:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.574 23:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.574 23:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.574 23:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.574 23:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:41.574 23:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.574 23:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:41.574 23:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.574 { 00:18:41.574 "cntlid": 45, 00:18:41.574 "qid": 0, 00:18:41.574 "state": "enabled", 00:18:41.574 "thread": "nvmf_tgt_poll_group_000", 00:18:41.574 "listen_address": { 00:18:41.574 "trtype": "TCP", 00:18:41.574 "adrfam": "IPv4", 00:18:41.574 "traddr": "10.0.0.2", 00:18:41.574 
"trsvcid": "4420" 00:18:41.574 }, 00:18:41.574 "peer_address": { 00:18:41.574 "trtype": "TCP", 00:18:41.574 "adrfam": "IPv4", 00:18:41.574 "traddr": "10.0.0.1", 00:18:41.574 "trsvcid": "38734" 00:18:41.574 }, 00:18:41.574 "auth": { 00:18:41.574 "state": "completed", 00:18:41.574 "digest": "sha256", 00:18:41.574 "dhgroup": "ffdhe8192" 00:18:41.574 } 00:18:41.574 } 00:18:41.574 ]' 00:18:41.574 23:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.835 23:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:41.835 23:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.835 23:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:41.835 23:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.835 23:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.835 23:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.835 23:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.835 23:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MjI5NjRhZmNiZGE4NWM0NDM3M2FhMjZmM2ZkNjNkOGM4ZmQ1NjVmYjI1ZjY1YTFlxsgRCQ==: --dhchap-ctrl-secret DHHC-1:01:NTk1NmU4YWU1YmYzZTlmZTI2YzQxOGJmMjMyMTQ2YWbsuYLn: 00:18:42.777 23:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.777 23:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:42.777 23:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:42.777 23:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.777 23:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:42.777 23:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.777 23:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:42.777 23:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:42.777 23:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:42.777 23:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.777 23:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:42.777 23:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:42.777 23:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:42.777 23:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:18:42.777 23:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:42.777 23:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:42.777 23:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.777 23:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:42.777 23:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.777 23:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.348 00:18:43.348 23:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.348 23:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.348 23:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.608 23:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.608 23:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.608 23:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:43.608 23:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.608 23:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:43.609 23:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.609 { 00:18:43.609 "cntlid": 47, 00:18:43.609 "qid": 0, 00:18:43.609 "state": "enabled", 00:18:43.609 "thread": "nvmf_tgt_poll_group_000", 00:18:43.609 "listen_address": { 00:18:43.609 "trtype": "TCP", 00:18:43.609 "adrfam": "IPv4", 00:18:43.609 "traddr": "10.0.0.2", 00:18:43.609 "trsvcid": "4420" 00:18:43.609 }, 00:18:43.609 "peer_address": { 00:18:43.609 "trtype": "TCP", 00:18:43.609 "adrfam": "IPv4", 00:18:43.609 "traddr": "10.0.0.1", 00:18:43.609 "trsvcid": "38764" 00:18:43.609 }, 00:18:43.609 "auth": { 00:18:43.609 "state": "completed", 00:18:43.609 "digest": "sha256", 00:18:43.609 "dhgroup": "ffdhe8192" 00:18:43.609 } 00:18:43.609 } 00:18:43.609 ]' 00:18:43.609 23:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.609 23:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.609 23:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.609 23:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:43.609 23:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.609 23:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.609 23:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:18:43.609 23:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.869 23:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NDFjNDNjYjYxZThjYjBmZTQ0MGY4OGFkNGJjZmU3MTBlZTQzNzkwOWQxNjM4ODI2OTQ4NGFjNmU2YTgwYTg5MYax0PQ=: 00:18:44.439 23:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.440 23:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:44.440 23:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:44.440 23:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.440 23:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:44.440 23:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:44.440 23:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:44.440 23:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.440 23:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:44.440 23:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:44.700 23:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:44.700 23:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.700 23:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:44.700 23:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:44.700 23:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:44.700 23:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.700 23:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.700 23:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:44.700 23:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.700 23:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:44.700 23:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.700 23:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.960 00:18:44.960 23:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.960 23:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.960 23:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.220 23:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.220 23:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.220 23:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:45.220 23:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.220 23:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:45.220 23:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.220 { 00:18:45.220 "cntlid": 49, 00:18:45.220 "qid": 0, 00:18:45.220 "state": "enabled", 00:18:45.220 "thread": "nvmf_tgt_poll_group_000", 00:18:45.220 "listen_address": { 00:18:45.220 "trtype": "TCP", 00:18:45.220 "adrfam": "IPv4", 00:18:45.220 "traddr": "10.0.0.2", 00:18:45.220 "trsvcid": "4420" 00:18:45.220 }, 00:18:45.220 "peer_address": { 00:18:45.220 "trtype": "TCP", 00:18:45.220 "adrfam": "IPv4", 00:18:45.220 "traddr": "10.0.0.1", 00:18:45.220 "trsvcid": "38794" 00:18:45.220 }, 00:18:45.220 "auth": { 00:18:45.220 "state": "completed", 00:18:45.220 "digest": "sha384", 00:18:45.220 "dhgroup": "null" 00:18:45.220 } 00:18:45.220 } 00:18:45.220 ]' 00:18:45.220 23:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.220 23:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:45.220 23:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.220 23:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:45.220 23:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.220 23:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.220 23:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.220 23:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.479 23:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:Mjc0ZTY3M2MyMDZhNjI0MDJhMDI3NjIxMjU1OTEwMTY0OGFjOWVmOTBlZWI0YzY1Q0Sqzg==: --dhchap-ctrl-secret DHHC-1:03:ODFmMDQ4MDI3YjkzMjk3NzFjMGUxOWQzNjhmMjcyYzU2MWVhYWMwYzE4ODI0ZmFlZWE5ZjM1OGUwZDJlODk2ZVyK/e4=: 00:18:46.049 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.049 23:55:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:46.049 23:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:46.049 23:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.049 23:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:46.049 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.049 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:46.049 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:46.309 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:46.309 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.309 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:46.309 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:46.309 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:46.309 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.309 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.309 23:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:46.309 23:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.309 23:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:46.309 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.309 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.568 00:18:46.568 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.568 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.568 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.568 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.568 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.568 23:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:46.568 23:55:01 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:18:46.568 23:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:46.568 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.568 { 00:18:46.568 "cntlid": 51, 00:18:46.568 "qid": 0, 00:18:46.569 "state": "enabled", 00:18:46.569 "thread": "nvmf_tgt_poll_group_000", 00:18:46.569 "listen_address": { 00:18:46.569 "trtype": "TCP", 00:18:46.569 "adrfam": "IPv4", 00:18:46.569 "traddr": "10.0.0.2", 00:18:46.569 "trsvcid": "4420" 00:18:46.569 }, 00:18:46.569 "peer_address": { 00:18:46.569 "trtype": "TCP", 00:18:46.569 "adrfam": "IPv4", 00:18:46.569 "traddr": "10.0.0.1", 00:18:46.569 "trsvcid": "38806" 00:18:46.569 }, 00:18:46.569 "auth": { 00:18:46.569 "state": "completed", 00:18:46.569 "digest": "sha384", 00:18:46.569 "dhgroup": "null" 00:18:46.569 } 00:18:46.569 } 00:18:46.569 ]' 00:18:46.569 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.828 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.828 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.828 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:46.828 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.828 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.828 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.828 23:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.088 23:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MjVkNzI1MTNiNmM4NjJjYzE5ZDAxNDI3OWFlMDE4OWXgMoGK: --dhchap-ctrl-secret DHHC-1:02:ZTkyYjAyMTliZmRlYzQ1YTM4N2NiNDZjMDg2YTlkNmVmY2YwM2YwYjNiZmVjNWQ4pfzi3g==: 00:18:47.657 23:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.657 23:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:47.657 23:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:47.657 23:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.657 23:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:47.657 23:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.657 23:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:47.657 23:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:47.657 23:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:47.657 
23:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.657 23:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:47.657 23:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:47.657 23:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:47.657 23:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.657 23:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.657 23:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:47.657 23:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.918 23:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:47.918 23:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.918 23:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.918 00:18:47.918 23:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.918 23:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.918 23:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.178 23:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.178 23:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.178 23:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:48.178 23:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.178 23:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:48.178 23:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.178 { 00:18:48.178 "cntlid": 53, 00:18:48.178 "qid": 0, 00:18:48.178 "state": "enabled", 00:18:48.178 "thread": "nvmf_tgt_poll_group_000", 00:18:48.178 "listen_address": { 00:18:48.178 "trtype": "TCP", 00:18:48.178 "adrfam": "IPv4", 00:18:48.178 "traddr": "10.0.0.2", 00:18:48.178 "trsvcid": "4420" 00:18:48.178 }, 00:18:48.178 "peer_address": { 00:18:48.178 "trtype": "TCP", 00:18:48.178 "adrfam": "IPv4", 00:18:48.178 "traddr": "10.0.0.1", 00:18:48.178 "trsvcid": "38832" 00:18:48.178 }, 00:18:48.178 "auth": { 00:18:48.178 "state": "completed", 00:18:48.178 "digest": "sha384", 00:18:48.178 "dhgroup": "null" 00:18:48.178 } 00:18:48.178 } 00:18:48.178 ]' 00:18:48.178 23:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.178 23:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:18:48.178 23:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.178 23:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:48.178 23:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.437 23:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.437 23:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.438 23:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.438 23:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MjI5NjRhZmNiZGE4NWM0NDM3M2FhMjZmM2ZkNjNkOGM4ZmQ1NjVmYjI1ZjY1YTFlxsgRCQ==: --dhchap-ctrl-secret DHHC-1:01:NTk1NmU4YWU1YmYzZTlmZTI2YzQxOGJmMjMyMTQ2YWbsuYLn: 00:18:49.378 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.378 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:49.378 23:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:49.378 23:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.378 23:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:49.378 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.378 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:49.378 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:49.378 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:49.378 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.378 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:49.378 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:49.378 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:49.378 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.378 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:49.378 23:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:49.378 23:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.378 23:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:49.378 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.378 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.639 00:18:49.639 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.639 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.639 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.639 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.639 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.639 23:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:49.639 23:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.639 23:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:49.639 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.639 { 00:18:49.639 "cntlid": 55, 00:18:49.639 "qid": 0, 00:18:49.639 "state": "enabled", 00:18:49.639 "thread": "nvmf_tgt_poll_group_000", 00:18:49.639 "listen_address": { 00:18:49.639 "trtype": "TCP", 00:18:49.639 "adrfam": "IPv4", 00:18:49.639 "traddr": "10.0.0.2", 00:18:49.639 "trsvcid": "4420" 00:18:49.639 }, 00:18:49.639 "peer_address": { 00:18:49.639 "trtype": "TCP", 00:18:49.639 "adrfam": "IPv4", 00:18:49.639 "traddr": "10.0.0.1", 00:18:49.639 "trsvcid": "36518" 00:18:49.639 }, 00:18:49.639 "auth": { 00:18:49.639 "state": "completed", 00:18:49.639 "digest": "sha384", 00:18:49.639 "dhgroup": "null" 00:18:49.639 } 00:18:49.639 } 00:18:49.639 ]' 00:18:49.639 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.899 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.899 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.899 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:49.900 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.900 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.900 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.900 23:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.160 23:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NDFjNDNjYjYxZThjYjBmZTQ0MGY4OGFkNGJjZmU3MTBlZTQzNzkwOWQxNjM4ODI2OTQ4NGFjNmU2YTgwYTg5MYax0PQ=: 00:18:50.731 23:55:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.731 23:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:50.731 23:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:50.731 23:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.731 23:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:50.731 23:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:50.731 23:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.731 23:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:50.731 23:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:50.992 23:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:50.992 23:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.992 23:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:50.992 23:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:50.992 23:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:50.992 23:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.992 23:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.992 23:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:50.992 23:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.992 23:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:50.992 23:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.992 23:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.252 00:18:51.252 23:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.252 23:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.252 23:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.252 23:55:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.252 23:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.252 23:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:51.252 23:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.252 23:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:51.252 23:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.252 { 00:18:51.252 "cntlid": 57, 00:18:51.252 "qid": 0, 00:18:51.252 "state": "enabled", 00:18:51.252 "thread": "nvmf_tgt_poll_group_000", 00:18:51.252 "listen_address": { 00:18:51.252 "trtype": "TCP", 00:18:51.252 "adrfam": "IPv4", 00:18:51.252 "traddr": "10.0.0.2", 00:18:51.252 "trsvcid": "4420" 00:18:51.252 }, 00:18:51.252 "peer_address": { 00:18:51.252 "trtype": "TCP", 00:18:51.252 "adrfam": "IPv4", 00:18:51.252 "traddr": "10.0.0.1", 00:18:51.252 "trsvcid": "36558" 00:18:51.252 }, 00:18:51.252 "auth": { 00:18:51.252 "state": "completed", 00:18:51.252 "digest": "sha384", 00:18:51.252 "dhgroup": "ffdhe2048" 00:18:51.252 } 00:18:51.252 } 00:18:51.252 ]' 00:18:51.252 23:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.252 23:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.252 23:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.252 23:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:51.512 23:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.512 23:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.512 23:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.512 23:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.512 23:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:Mjc0ZTY3M2MyMDZhNjI0MDJhMDI3NjIxMjU1OTEwMTY0OGFjOWVmOTBlZWI0YzY1Q0Sqzg==: --dhchap-ctrl-secret DHHC-1:03:ODFmMDQ4MDI3YjkzMjk3NzFjMGUxOWQzNjhmMjcyYzU2MWVhYWMwYzE4ODI0ZmFlZWE5ZjM1OGUwZDJlODk2ZVyK/e4=: 00:18:52.452 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.452 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:52.452 23:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:52.452 23:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.452 23:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:52.452 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.452 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:52.452 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:52.452 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:52.452 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.452 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:52.452 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:52.452 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:52.452 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.452 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.452 23:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:52.452 23:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.452 23:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:52.452 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.452 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.712 00:18:52.712 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.712 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.712 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.973 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.973 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.973 23:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:52.973 23:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.973 23:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:52.973 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.973 { 00:18:52.973 "cntlid": 59, 00:18:52.973 "qid": 0, 00:18:52.973 "state": "enabled", 00:18:52.973 "thread": "nvmf_tgt_poll_group_000", 00:18:52.973 "listen_address": { 00:18:52.973 "trtype": "TCP", 00:18:52.973 "adrfam": "IPv4", 00:18:52.973 "traddr": "10.0.0.2", 00:18:52.973 "trsvcid": "4420" 00:18:52.973 }, 00:18:52.973 "peer_address": { 00:18:52.973 "trtype": "TCP", 00:18:52.973 "adrfam": "IPv4", 00:18:52.973 
"traddr": "10.0.0.1", 00:18:52.973 "trsvcid": "36588" 00:18:52.973 }, 00:18:52.973 "auth": { 00:18:52.973 "state": "completed", 00:18:52.973 "digest": "sha384", 00:18:52.973 "dhgroup": "ffdhe2048" 00:18:52.973 } 00:18:52.973 } 00:18:52.973 ]' 00:18:52.973 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.973 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:52.973 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.973 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:52.973 23:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.973 23:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.973 23:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.973 23:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.233 23:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MjVkNzI1MTNiNmM4NjJjYzE5ZDAxNDI3OWFlMDE4OWXgMoGK: --dhchap-ctrl-secret DHHC-1:02:ZTkyYjAyMTliZmRlYzQ1YTM4N2NiNDZjMDg2YTlkNmVmY2YwM2YwYjNiZmVjNWQ4pfzi3g==: 00:18:53.804 23:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.804 23:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:53.804 23:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:53.804 23:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.804 23:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:53.804 23:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.804 23:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:53.804 23:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:54.064 23:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:54.064 23:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.064 23:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:54.064 23:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:54.064 23:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:54.064 23:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.064 23:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.064 23:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:54.064 23:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.064 23:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:54.064 23:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.064 23:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.064 00:18:54.324 23:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.324 23:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.324 23:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.324 23:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.324 23:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.324 23:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:54.324 23:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.324 23:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:54.324 23:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.324 { 00:18:54.324 "cntlid": 61, 00:18:54.324 "qid": 0, 00:18:54.324 "state": "enabled", 00:18:54.324 "thread": "nvmf_tgt_poll_group_000", 00:18:54.324 "listen_address": { 00:18:54.324 "trtype": "TCP", 00:18:54.324 "adrfam": "IPv4", 00:18:54.324 "traddr": "10.0.0.2", 00:18:54.324 "trsvcid": "4420" 00:18:54.324 }, 00:18:54.324 "peer_address": { 00:18:54.324 "trtype": "TCP", 00:18:54.324 "adrfam": "IPv4", 00:18:54.324 "traddr": "10.0.0.1", 00:18:54.324 "trsvcid": "36620" 00:18:54.324 }, 00:18:54.324 "auth": { 00:18:54.324 "state": "completed", 00:18:54.324 "digest": "sha384", 00:18:54.324 "dhgroup": "ffdhe2048" 00:18:54.324 } 00:18:54.324 } 00:18:54.324 ]' 00:18:54.324 23:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.324 23:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:54.324 23:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.324 23:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:54.324 23:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.583 23:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.583 23:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.583 23:55:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.583 23:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MjI5NjRhZmNiZGE4NWM0NDM3M2FhMjZmM2ZkNjNkOGM4ZmQ1NjVmYjI1ZjY1YTFlxsgRCQ==: --dhchap-ctrl-secret DHHC-1:01:NTk1NmU4YWU1YmYzZTlmZTI2YzQxOGJmMjMyMTQ2YWbsuYLn: 00:18:55.524 23:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.524 23:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:55.524 23:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:55.524 23:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.524 23:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:55.524 23:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.524 23:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:55.524 23:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:55.524 23:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:55.524 23:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.524 23:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:55.524 23:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:55.524 23:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:55.524 23:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.524 23:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:55.524 23:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:55.524 23:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.524 23:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:55.524 23:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.524 23:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.784 00:18:55.784 23:55:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.784 23:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.784 23:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.045 23:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.045 23:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.045 23:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:56.045 23:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.045 23:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:56.045 23:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.045 { 00:18:56.045 "cntlid": 63, 00:18:56.045 "qid": 0, 00:18:56.045 "state": "enabled", 00:18:56.045 "thread": "nvmf_tgt_poll_group_000", 00:18:56.045 "listen_address": { 00:18:56.045 "trtype": "TCP", 00:18:56.045 "adrfam": "IPv4", 00:18:56.045 "traddr": "10.0.0.2", 00:18:56.045 "trsvcid": "4420" 00:18:56.045 }, 00:18:56.045 "peer_address": { 00:18:56.045 "trtype": "TCP", 00:18:56.045 "adrfam": "IPv4", 00:18:56.045 "traddr": "10.0.0.1", 00:18:56.045 "trsvcid": "36642" 00:18:56.045 }, 00:18:56.045 "auth": { 00:18:56.045 "state": "completed", 00:18:56.045 "digest": "sha384", 00:18:56.045 "dhgroup": "ffdhe2048" 00:18:56.045 } 00:18:56.045 } 00:18:56.045 ]' 00:18:56.045 23:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.045 23:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:56.045 23:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.045 23:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:56.045 23:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.045 23:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.045 23:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.045 23:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.305 23:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NDFjNDNjYjYxZThjYjBmZTQ0MGY4OGFkNGJjZmU3MTBlZTQzNzkwOWQxNjM4ODI2OTQ4NGFjNmU2YTgwYTg5MYax0PQ=: 00:18:56.876 23:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.876 23:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:56.876 23:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:56.876 23:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
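The entries above complete one pass of the test's digest/dhgroup/key matrix (here sha384 with ffdhe2048): the host NQN is allowed on the subsystem with a DH-HMAC-CHAP key, a controller is attached through the host-side RPC socket, the negotiated qpair auth parameters are checked with jq, the same credentials are then exercised through nvme-cli with raw DHHC-1 secrets, and the host is removed before the loop advances to the next dhgroup. A minimal stand-alone sketch of that per-iteration sequence follows; it uses only RPCs and flags visible in this trace, and it assumes the SPDK target answers on its default RPC socket, the host-side bdev_nvme RPC server is on /var/tmp/host.sock, and key objects key0/ckey0 were registered earlier in the run (the rpc/subnqn/hostnqn/qpairs variable names are illustrative only, not taken from the script).

# Sketch, not part of the recorded run: one connect/verify/teardown cycle of the
# kind exercised by target/auth.sh, under the assumptions stated above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# Limit the host-side initiator to a single digest/dhgroup combination.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# Allow the host on the subsystem with a DH-HMAC-CHAP key and controller key.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach a controller from the host side; this is where the AUTH negotiation runs.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Confirm the qpair negotiated the expected digest/dhgroup and completed auth.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Tear down before the next dhgroup/key combination.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The nvme-cli leg of the same iteration, visible in the trace as nvme connect ... --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:... followed by nvme disconnect -n nqn.2024-03.io.spdk:cnode0, differs only in passing the raw DHHC-1 secret strings instead of named key objects.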
00:18:56.876 23:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:56.876 23:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.876 23:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.876 23:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:56.876 23:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:57.136 23:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:57.136 23:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.136 23:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:57.136 23:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:57.136 23:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:57.136 23:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.136 23:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.136 23:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:57.136 23:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.136 23:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:57.136 23:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.136 23:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.395 00:18:57.395 23:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.396 23:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.396 23:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.396 23:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.396 23:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.396 23:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:57.396 23:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.396 23:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:57.396 23:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.396 { 
00:18:57.396 "cntlid": 65, 00:18:57.396 "qid": 0, 00:18:57.396 "state": "enabled", 00:18:57.396 "thread": "nvmf_tgt_poll_group_000", 00:18:57.396 "listen_address": { 00:18:57.396 "trtype": "TCP", 00:18:57.396 "adrfam": "IPv4", 00:18:57.396 "traddr": "10.0.0.2", 00:18:57.396 "trsvcid": "4420" 00:18:57.396 }, 00:18:57.396 "peer_address": { 00:18:57.396 "trtype": "TCP", 00:18:57.396 "adrfam": "IPv4", 00:18:57.396 "traddr": "10.0.0.1", 00:18:57.396 "trsvcid": "36658" 00:18:57.396 }, 00:18:57.396 "auth": { 00:18:57.396 "state": "completed", 00:18:57.396 "digest": "sha384", 00:18:57.396 "dhgroup": "ffdhe3072" 00:18:57.396 } 00:18:57.396 } 00:18:57.396 ]' 00:18:57.396 23:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.655 23:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:57.655 23:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.655 23:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:57.655 23:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.655 23:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.655 23:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.655 23:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.655 23:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:Mjc0ZTY3M2MyMDZhNjI0MDJhMDI3NjIxMjU1OTEwMTY0OGFjOWVmOTBlZWI0YzY1Q0Sqzg==: --dhchap-ctrl-secret DHHC-1:03:ODFmMDQ4MDI3YjkzMjk3NzFjMGUxOWQzNjhmMjcyYzU2MWVhYWMwYzE4ODI0ZmFlZWE5ZjM1OGUwZDJlODk2ZVyK/e4=: 00:18:58.601 23:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.601 23:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:58.601 23:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:58.601 23:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.601 23:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:58.601 23:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.601 23:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:58.601 23:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:58.601 23:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:58.601 23:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.601 23:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:18:58.601 23:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:58.601 23:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:58.601 23:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.601 23:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.601 23:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:58.601 23:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.601 23:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:58.601 23:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.601 23:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.860 00:18:58.860 23:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.860 23:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.860 23:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.119 23:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.119 23:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.119 23:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:59.119 23:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.119 23:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:59.119 23:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.119 { 00:18:59.119 "cntlid": 67, 00:18:59.119 "qid": 0, 00:18:59.119 "state": "enabled", 00:18:59.119 "thread": "nvmf_tgt_poll_group_000", 00:18:59.119 "listen_address": { 00:18:59.119 "trtype": "TCP", 00:18:59.119 "adrfam": "IPv4", 00:18:59.119 "traddr": "10.0.0.2", 00:18:59.119 "trsvcid": "4420" 00:18:59.119 }, 00:18:59.119 "peer_address": { 00:18:59.119 "trtype": "TCP", 00:18:59.119 "adrfam": "IPv4", 00:18:59.119 "traddr": "10.0.0.1", 00:18:59.119 "trsvcid": "47790" 00:18:59.119 }, 00:18:59.119 "auth": { 00:18:59.119 "state": "completed", 00:18:59.119 "digest": "sha384", 00:18:59.119 "dhgroup": "ffdhe3072" 00:18:59.119 } 00:18:59.119 } 00:18:59.119 ]' 00:18:59.119 23:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.119 23:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:59.119 23:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.119 23:55:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:59.119 23:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.119 23:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.119 23:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.119 23:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.379 23:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MjVkNzI1MTNiNmM4NjJjYzE5ZDAxNDI3OWFlMDE4OWXgMoGK: --dhchap-ctrl-secret DHHC-1:02:ZTkyYjAyMTliZmRlYzQ1YTM4N2NiNDZjMDg2YTlkNmVmY2YwM2YwYjNiZmVjNWQ4pfzi3g==: 00:18:59.950 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.950 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:59.950 23:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:59.950 23:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.950 23:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:59.950 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.950 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:59.950 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:00.210 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:00.210 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.210 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:00.210 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:00.210 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:00.210 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.210 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.210 23:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:00.210 23:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.210 23:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:00.210 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.210 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.471 00:19:00.471 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.471 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.471 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.732 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.732 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.732 23:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:00.732 23:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.732 23:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:00.732 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.732 { 00:19:00.732 "cntlid": 69, 00:19:00.732 "qid": 0, 00:19:00.732 "state": "enabled", 00:19:00.732 "thread": "nvmf_tgt_poll_group_000", 00:19:00.732 "listen_address": { 00:19:00.732 "trtype": "TCP", 00:19:00.732 "adrfam": "IPv4", 00:19:00.732 "traddr": "10.0.0.2", 00:19:00.732 "trsvcid": "4420" 00:19:00.732 }, 00:19:00.732 "peer_address": { 00:19:00.732 "trtype": "TCP", 00:19:00.732 "adrfam": "IPv4", 00:19:00.732 "traddr": "10.0.0.1", 00:19:00.732 "trsvcid": "47812" 00:19:00.732 }, 00:19:00.732 "auth": { 00:19:00.732 "state": "completed", 00:19:00.732 "digest": "sha384", 00:19:00.732 "dhgroup": "ffdhe3072" 00:19:00.732 } 00:19:00.732 } 00:19:00.732 ]' 00:19:00.732 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.732 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:00.732 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.732 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:00.732 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.732 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.732 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.733 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.994 23:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MjI5NjRhZmNiZGE4NWM0NDM3M2FhMjZmM2ZkNjNkOGM4ZmQ1NjVmYjI1ZjY1YTFlxsgRCQ==: --dhchap-ctrl-secret 
DHHC-1:01:NTk1NmU4YWU1YmYzZTlmZTI2YzQxOGJmMjMyMTQ2YWbsuYLn: 00:19:01.565 23:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.566 23:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:01.566 23:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:01.566 23:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.566 23:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:01.566 23:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.566 23:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:01.566 23:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:01.826 23:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:01.826 23:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.826 23:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:01.826 23:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:01.826 23:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:01.826 23:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.826 23:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:01.826 23:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:01.826 23:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.826 23:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:01.826 23:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:01.826 23:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.087 00:19:02.087 23:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.087 23:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.087 23:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.087 23:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.087 23:55:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.087 23:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:02.087 23:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.087 23:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:02.087 23:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.087 { 00:19:02.087 "cntlid": 71, 00:19:02.087 "qid": 0, 00:19:02.087 "state": "enabled", 00:19:02.087 "thread": "nvmf_tgt_poll_group_000", 00:19:02.087 "listen_address": { 00:19:02.087 "trtype": "TCP", 00:19:02.087 "adrfam": "IPv4", 00:19:02.087 "traddr": "10.0.0.2", 00:19:02.087 "trsvcid": "4420" 00:19:02.087 }, 00:19:02.087 "peer_address": { 00:19:02.087 "trtype": "TCP", 00:19:02.087 "adrfam": "IPv4", 00:19:02.087 "traddr": "10.0.0.1", 00:19:02.087 "trsvcid": "47846" 00:19:02.087 }, 00:19:02.087 "auth": { 00:19:02.087 "state": "completed", 00:19:02.087 "digest": "sha384", 00:19:02.087 "dhgroup": "ffdhe3072" 00:19:02.087 } 00:19:02.087 } 00:19:02.087 ]' 00:19:02.087 23:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.348 23:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:02.348 23:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.348 23:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:02.348 23:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.348 23:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.348 23:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.348 23:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.608 23:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NDFjNDNjYjYxZThjYjBmZTQ0MGY4OGFkNGJjZmU3MTBlZTQzNzkwOWQxNjM4ODI2OTQ4NGFjNmU2YTgwYTg5MYax0PQ=: 00:19:03.181 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.181 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:03.181 23:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:03.181 23:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.181 23:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:03.181 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.181 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.181 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:03.182 23:55:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:03.182 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:03.182 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.182 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:03.182 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:03.182 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:03.182 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.182 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.182 23:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:03.182 23:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.182 23:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:03.182 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.182 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.442 00:19:03.442 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.442 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.442 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.703 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.703 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.703 23:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:03.704 23:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.704 23:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:03.704 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.704 { 00:19:03.704 "cntlid": 73, 00:19:03.704 "qid": 0, 00:19:03.704 "state": "enabled", 00:19:03.704 "thread": "nvmf_tgt_poll_group_000", 00:19:03.704 "listen_address": { 00:19:03.704 "trtype": "TCP", 00:19:03.704 "adrfam": "IPv4", 00:19:03.704 "traddr": "10.0.0.2", 00:19:03.704 "trsvcid": "4420" 00:19:03.704 }, 00:19:03.704 "peer_address": { 00:19:03.704 "trtype": "TCP", 00:19:03.704 "adrfam": "IPv4", 00:19:03.704 "traddr": "10.0.0.1", 00:19:03.704 "trsvcid": "47876" 00:19:03.704 }, 00:19:03.704 "auth": { 00:19:03.704 
"state": "completed", 00:19:03.704 "digest": "sha384", 00:19:03.704 "dhgroup": "ffdhe4096" 00:19:03.704 } 00:19:03.704 } 00:19:03.704 ]' 00:19:03.704 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.704 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:03.704 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.704 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:03.704 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.965 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.965 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.965 23:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.965 23:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:Mjc0ZTY3M2MyMDZhNjI0MDJhMDI3NjIxMjU1OTEwMTY0OGFjOWVmOTBlZWI0YzY1Q0Sqzg==: --dhchap-ctrl-secret DHHC-1:03:ODFmMDQ4MDI3YjkzMjk3NzFjMGUxOWQzNjhmMjcyYzU2MWVhYWMwYzE4ODI0ZmFlZWE5ZjM1OGUwZDJlODk2ZVyK/e4=: 00:19:04.908 23:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.908 23:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:04.908 23:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:04.908 23:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.908 23:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:04.908 23:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.908 23:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:04.908 23:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:04.908 23:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:04.908 23:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.908 23:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:04.908 23:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:04.908 23:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:04.908 23:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.908 23:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.908 23:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:04.908 23:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.908 23:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:04.908 23:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.908 23:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.169 00:19:05.169 23:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.169 23:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.169 23:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.430 23:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.430 23:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.430 23:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:05.430 23:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.430 23:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:05.430 23:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.430 { 00:19:05.430 "cntlid": 75, 00:19:05.430 "qid": 0, 00:19:05.430 "state": "enabled", 00:19:05.430 "thread": "nvmf_tgt_poll_group_000", 00:19:05.430 "listen_address": { 00:19:05.430 "trtype": "TCP", 00:19:05.430 "adrfam": "IPv4", 00:19:05.430 "traddr": "10.0.0.2", 00:19:05.430 "trsvcid": "4420" 00:19:05.430 }, 00:19:05.430 "peer_address": { 00:19:05.430 "trtype": "TCP", 00:19:05.430 "adrfam": "IPv4", 00:19:05.430 "traddr": "10.0.0.1", 00:19:05.430 "trsvcid": "47910" 00:19:05.430 }, 00:19:05.430 "auth": { 00:19:05.430 "state": "completed", 00:19:05.430 "digest": "sha384", 00:19:05.430 "dhgroup": "ffdhe4096" 00:19:05.430 } 00:19:05.430 } 00:19:05.430 ]' 00:19:05.430 23:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.430 23:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:05.430 23:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.430 23:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:05.430 23:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.430 23:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.430 23:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.430 23:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.691 23:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MjVkNzI1MTNiNmM4NjJjYzE5ZDAxNDI3OWFlMDE4OWXgMoGK: --dhchap-ctrl-secret DHHC-1:02:ZTkyYjAyMTliZmRlYzQ1YTM4N2NiNDZjMDg2YTlkNmVmY2YwM2YwYjNiZmVjNWQ4pfzi3g==: 00:19:06.263 23:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.263 23:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:06.263 23:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:06.263 23:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.264 23:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:06.264 23:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.264 23:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:06.264 23:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:06.525 23:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:06.525 23:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.525 23:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:06.525 23:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:06.525 23:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:06.525 23:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.525 23:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.525 23:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:06.525 23:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.525 23:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:06.525 23:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.525 23:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:19:06.785 00:19:06.785 23:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.785 23:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.785 23:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.050 23:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.050 23:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.050 23:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:07.050 23:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.050 23:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:07.050 23:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.050 { 00:19:07.050 "cntlid": 77, 00:19:07.050 "qid": 0, 00:19:07.050 "state": "enabled", 00:19:07.050 "thread": "nvmf_tgt_poll_group_000", 00:19:07.050 "listen_address": { 00:19:07.050 "trtype": "TCP", 00:19:07.050 "adrfam": "IPv4", 00:19:07.050 "traddr": "10.0.0.2", 00:19:07.050 "trsvcid": "4420" 00:19:07.050 }, 00:19:07.050 "peer_address": { 00:19:07.050 "trtype": "TCP", 00:19:07.050 "adrfam": "IPv4", 00:19:07.050 "traddr": "10.0.0.1", 00:19:07.050 "trsvcid": "47942" 00:19:07.050 }, 00:19:07.050 "auth": { 00:19:07.050 "state": "completed", 00:19:07.050 "digest": "sha384", 00:19:07.050 "dhgroup": "ffdhe4096" 00:19:07.050 } 00:19:07.050 } 00:19:07.050 ]' 00:19:07.050 23:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.050 23:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:07.050 23:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.050 23:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:07.050 23:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.050 23:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.050 23:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.050 23:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.361 23:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MjI5NjRhZmNiZGE4NWM0NDM3M2FhMjZmM2ZkNjNkOGM4ZmQ1NjVmYjI1ZjY1YTFlxsgRCQ==: --dhchap-ctrl-secret DHHC-1:01:NTk1NmU4YWU1YmYzZTlmZTI2YzQxOGJmMjMyMTQ2YWbsuYLn: 00:19:07.956 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.956 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:07.956 23:55:23 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:19:07.956 23:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.956 23:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:07.956 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.956 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:07.956 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:08.218 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:08.218 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.218 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:08.218 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:08.218 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:08.218 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.218 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:08.218 23:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:08.218 23:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.218 23:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:08.218 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.218 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.480 00:19:08.480 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.480 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.480 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.480 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.480 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.480 23:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:08.480 23:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.741 23:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:08.741 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.741 { 00:19:08.741 "cntlid": 79, 00:19:08.741 "qid": 
0, 00:19:08.741 "state": "enabled", 00:19:08.741 "thread": "nvmf_tgt_poll_group_000", 00:19:08.741 "listen_address": { 00:19:08.741 "trtype": "TCP", 00:19:08.741 "adrfam": "IPv4", 00:19:08.741 "traddr": "10.0.0.2", 00:19:08.741 "trsvcid": "4420" 00:19:08.741 }, 00:19:08.741 "peer_address": { 00:19:08.741 "trtype": "TCP", 00:19:08.741 "adrfam": "IPv4", 00:19:08.741 "traddr": "10.0.0.1", 00:19:08.741 "trsvcid": "56334" 00:19:08.741 }, 00:19:08.741 "auth": { 00:19:08.741 "state": "completed", 00:19:08.741 "digest": "sha384", 00:19:08.741 "dhgroup": "ffdhe4096" 00:19:08.741 } 00:19:08.741 } 00:19:08.741 ]' 00:19:08.741 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.741 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:08.741 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.741 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:08.741 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.741 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.741 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.741 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.001 23:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NDFjNDNjYjYxZThjYjBmZTQ0MGY4OGFkNGJjZmU3MTBlZTQzNzkwOWQxNjM4ODI2OTQ4NGFjNmU2YTgwYTg5MYax0PQ=: 00:19:09.572 23:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.572 23:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:09.572 23:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:09.572 23:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.572 23:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:09.572 23:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:09.572 23:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.572 23:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:09.572 23:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:09.833 23:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:09.833 23:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.833 23:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:09.833 23:55:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:09.833 23:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:09.833 23:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.833 23:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.833 23:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:09.833 23:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.833 23:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:09.833 23:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.833 23:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.094 00:19:10.094 23:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.094 23:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.094 23:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.355 23:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.355 23:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.355 23:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:10.355 23:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.355 23:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:10.355 23:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.355 { 00:19:10.355 "cntlid": 81, 00:19:10.355 "qid": 0, 00:19:10.355 "state": "enabled", 00:19:10.355 "thread": "nvmf_tgt_poll_group_000", 00:19:10.355 "listen_address": { 00:19:10.355 "trtype": "TCP", 00:19:10.355 "adrfam": "IPv4", 00:19:10.355 "traddr": "10.0.0.2", 00:19:10.355 "trsvcid": "4420" 00:19:10.355 }, 00:19:10.355 "peer_address": { 00:19:10.355 "trtype": "TCP", 00:19:10.355 "adrfam": "IPv4", 00:19:10.355 "traddr": "10.0.0.1", 00:19:10.355 "trsvcid": "56364" 00:19:10.355 }, 00:19:10.355 "auth": { 00:19:10.355 "state": "completed", 00:19:10.355 "digest": "sha384", 00:19:10.355 "dhgroup": "ffdhe6144" 00:19:10.355 } 00:19:10.355 } 00:19:10.355 ]' 00:19:10.355 23:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.355 23:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:10.355 23:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.355 23:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:10.355 23:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.355 23:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.355 23:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.355 23:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.617 23:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:Mjc0ZTY3M2MyMDZhNjI0MDJhMDI3NjIxMjU1OTEwMTY0OGFjOWVmOTBlZWI0YzY1Q0Sqzg==: --dhchap-ctrl-secret DHHC-1:03:ODFmMDQ4MDI3YjkzMjk3NzFjMGUxOWQzNjhmMjcyYzU2MWVhYWMwYzE4ODI0ZmFlZWE5ZjM1OGUwZDJlODk2ZVyK/e4=: 00:19:11.188 23:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.188 23:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:11.188 23:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:11.188 23:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.188 23:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:11.188 23:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.188 23:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:11.188 23:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:11.449 23:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:11.449 23:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.449 23:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:11.449 23:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:11.449 23:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:11.449 23:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.449 23:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.449 23:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:11.449 23:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.449 23:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:11.449 23:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.449 23:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.710 00:19:11.710 23:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.710 23:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.710 23:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.972 23:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.972 23:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.972 23:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:11.972 23:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.972 23:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:11.972 23:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.972 { 00:19:11.972 "cntlid": 83, 00:19:11.972 "qid": 0, 00:19:11.972 "state": "enabled", 00:19:11.972 "thread": "nvmf_tgt_poll_group_000", 00:19:11.972 "listen_address": { 00:19:11.972 "trtype": "TCP", 00:19:11.972 "adrfam": "IPv4", 00:19:11.972 "traddr": "10.0.0.2", 00:19:11.972 "trsvcid": "4420" 00:19:11.972 }, 00:19:11.972 "peer_address": { 00:19:11.972 "trtype": "TCP", 00:19:11.972 "adrfam": "IPv4", 00:19:11.972 "traddr": "10.0.0.1", 00:19:11.972 "trsvcid": "56380" 00:19:11.972 }, 00:19:11.972 "auth": { 00:19:11.972 "state": "completed", 00:19:11.972 "digest": "sha384", 00:19:11.972 "dhgroup": "ffdhe6144" 00:19:11.972 } 00:19:11.972 } 00:19:11.972 ]' 00:19:11.972 23:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.972 23:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:11.972 23:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.972 23:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:11.972 23:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.972 23:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.972 23:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.972 23:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.233 23:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MjVkNzI1MTNiNmM4NjJjYzE5ZDAxNDI3OWFlMDE4OWXgMoGK: --dhchap-ctrl-secret 
DHHC-1:02:ZTkyYjAyMTliZmRlYzQ1YTM4N2NiNDZjMDg2YTlkNmVmY2YwM2YwYjNiZmVjNWQ4pfzi3g==: 00:19:12.805 23:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.805 23:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:12.805 23:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:12.805 23:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.805 23:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:12.805 23:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.805 23:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:12.805 23:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:13.065 23:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:13.065 23:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.065 23:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:13.065 23:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:13.065 23:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:13.065 23:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.065 23:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.065 23:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:13.065 23:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.065 23:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:13.065 23:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.065 23:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.325 00:19:13.325 23:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.325 23:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.325 23:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.587 23:55:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.587 23:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.587 23:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:13.587 23:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.587 23:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:13.587 23:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.587 { 00:19:13.587 "cntlid": 85, 00:19:13.587 "qid": 0, 00:19:13.587 "state": "enabled", 00:19:13.587 "thread": "nvmf_tgt_poll_group_000", 00:19:13.587 "listen_address": { 00:19:13.587 "trtype": "TCP", 00:19:13.587 "adrfam": "IPv4", 00:19:13.587 "traddr": "10.0.0.2", 00:19:13.587 "trsvcid": "4420" 00:19:13.587 }, 00:19:13.587 "peer_address": { 00:19:13.587 "trtype": "TCP", 00:19:13.587 "adrfam": "IPv4", 00:19:13.587 "traddr": "10.0.0.1", 00:19:13.587 "trsvcid": "56402" 00:19:13.587 }, 00:19:13.587 "auth": { 00:19:13.587 "state": "completed", 00:19:13.587 "digest": "sha384", 00:19:13.587 "dhgroup": "ffdhe6144" 00:19:13.587 } 00:19:13.587 } 00:19:13.587 ]' 00:19:13.587 23:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.587 23:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:13.587 23:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.587 23:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:13.587 23:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.587 23:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.587 23:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.587 23:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.848 23:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MjI5NjRhZmNiZGE4NWM0NDM3M2FhMjZmM2ZkNjNkOGM4ZmQ1NjVmYjI1ZjY1YTFlxsgRCQ==: --dhchap-ctrl-secret DHHC-1:01:NTk1NmU4YWU1YmYzZTlmZTI2YzQxOGJmMjMyMTQ2YWbsuYLn: 00:19:14.791 23:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.791 23:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:14.791 23:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:14.791 23:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.791 23:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:14.791 23:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.791 23:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
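For reference, the per-key check that this trace repeats for every digest/dhgroup/key combination condenses to the sequence below. This is a readable sketch assembled from the commands logged above, not the literal auth.sh source; the RPC, SUBNQN, and HOSTNQN shell variables are shorthand introduced here for legibility, while the socket path, address, port, and NQNs are the ones that appear in the log, and key0/ckey0 stand for whichever key pair the loop is currently exercising.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# Restrict the host-side bdev layer to the digest/dhgroup combination under test.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Register the host on the subsystem with the key (and controller key) under test.
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach a controller through the host RPC socket using the same key pair.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Confirm the controller came up and the target saw authentication complete.
$RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expect "completed"

# Detach before the next key is tried.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The trace then runs the same round for the remaining keys before moving on to the next dhgroup.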
00:19:14.791 23:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:14.791 23:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:14.791 23:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.791 23:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:14.791 23:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:14.791 23:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:14.791 23:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.791 23:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:14.791 23:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:14.791 23:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.791 23:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:14.791 23:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.791 23:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:15.052 00:19:15.052 23:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.052 23:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.052 23:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.312 23:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.313 23:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.313 23:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:15.313 23:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.313 23:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:15.313 23:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.313 { 00:19:15.313 "cntlid": 87, 00:19:15.313 "qid": 0, 00:19:15.313 "state": "enabled", 00:19:15.313 "thread": "nvmf_tgt_poll_group_000", 00:19:15.313 "listen_address": { 00:19:15.313 "trtype": "TCP", 00:19:15.313 "adrfam": "IPv4", 00:19:15.313 "traddr": "10.0.0.2", 00:19:15.313 "trsvcid": "4420" 00:19:15.313 }, 00:19:15.313 "peer_address": { 00:19:15.313 "trtype": "TCP", 00:19:15.313 "adrfam": "IPv4", 00:19:15.313 "traddr": "10.0.0.1", 00:19:15.313 "trsvcid": "56418" 00:19:15.313 }, 00:19:15.313 "auth": { 00:19:15.313 "state": "completed", 
00:19:15.313 "digest": "sha384", 00:19:15.313 "dhgroup": "ffdhe6144" 00:19:15.313 } 00:19:15.313 } 00:19:15.313 ]' 00:19:15.313 23:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.313 23:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:15.313 23:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.313 23:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:15.313 23:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.313 23:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.313 23:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.313 23:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.573 23:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NDFjNDNjYjYxZThjYjBmZTQ0MGY4OGFkNGJjZmU3MTBlZTQzNzkwOWQxNjM4ODI2OTQ4NGFjNmU2YTgwYTg5MYax0PQ=: 00:19:16.144 23:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.144 23:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:16.144 23:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:16.144 23:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.144 23:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:16.144 23:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.144 23:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.144 23:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:16.144 23:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:16.406 23:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:16.406 23:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.406 23:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:16.406 23:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:16.406 23:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:16.406 23:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.406 23:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:16.406 23:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:16.406 23:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.406 23:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:16.406 23:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.406 23:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.977 00:19:16.977 23:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.977 23:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.977 23:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.977 23:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.977 23:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.977 23:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:16.977 23:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.977 23:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:16.977 23:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.977 { 00:19:16.977 "cntlid": 89, 00:19:16.977 "qid": 0, 00:19:16.977 "state": "enabled", 00:19:16.977 "thread": "nvmf_tgt_poll_group_000", 00:19:16.977 "listen_address": { 00:19:16.977 "trtype": "TCP", 00:19:16.977 "adrfam": "IPv4", 00:19:16.977 "traddr": "10.0.0.2", 00:19:16.977 "trsvcid": "4420" 00:19:16.977 }, 00:19:16.977 "peer_address": { 00:19:16.977 "trtype": "TCP", 00:19:16.977 "adrfam": "IPv4", 00:19:16.977 "traddr": "10.0.0.1", 00:19:16.977 "trsvcid": "56428" 00:19:16.977 }, 00:19:16.977 "auth": { 00:19:16.977 "state": "completed", 00:19:16.977 "digest": "sha384", 00:19:16.977 "dhgroup": "ffdhe8192" 00:19:16.977 } 00:19:16.977 } 00:19:16.977 ]' 00:19:16.977 23:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.238 23:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:17.238 23:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.238 23:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:17.238 23:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.238 23:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.238 23:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.238 23:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.499 23:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:Mjc0ZTY3M2MyMDZhNjI0MDJhMDI3NjIxMjU1OTEwMTY0OGFjOWVmOTBlZWI0YzY1Q0Sqzg==: --dhchap-ctrl-secret DHHC-1:03:ODFmMDQ4MDI3YjkzMjk3NzFjMGUxOWQzNjhmMjcyYzU2MWVhYWMwYzE4ODI0ZmFlZWE5ZjM1OGUwZDJlODk2ZVyK/e4=: 00:19:18.068 23:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.068 23:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:18.068 23:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:18.068 23:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.068 23:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:18.068 23:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.068 23:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:18.068 23:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:18.068 23:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:18.068 23:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.068 23:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:18.068 23:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:18.068 23:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:18.068 23:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.068 23:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.068 23:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:18.068 23:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.328 23:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:18.328 23:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.328 23:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
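After each RPC-driven round, the same key material is exercised from the kernel initiator side. Condensed in the same spirit (the DHHC-1 secrets are elided here because the full values are generated earlier in the run and already appear verbatim in the log), the nvme-cli leg amounts to:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# Kernel initiator connects to the target, presenting host and controller secrets.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --dhchap-secret "DHHC-1:00:..." --dhchap-ctrl-secret "DHHC-1:03:..."

# Tear down and de-register the host before the next digest/dhgroup round.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"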
00:19:18.587 00:19:18.845 23:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.845 23:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.845 23:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.845 23:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.845 23:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.845 23:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:18.845 23:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.845 23:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:18.845 23:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.845 { 00:19:18.845 "cntlid": 91, 00:19:18.845 "qid": 0, 00:19:18.845 "state": "enabled", 00:19:18.845 "thread": "nvmf_tgt_poll_group_000", 00:19:18.845 "listen_address": { 00:19:18.845 "trtype": "TCP", 00:19:18.845 "adrfam": "IPv4", 00:19:18.845 "traddr": "10.0.0.2", 00:19:18.845 "trsvcid": "4420" 00:19:18.845 }, 00:19:18.845 "peer_address": { 00:19:18.845 "trtype": "TCP", 00:19:18.845 "adrfam": "IPv4", 00:19:18.845 "traddr": "10.0.0.1", 00:19:18.845 "trsvcid": "50074" 00:19:18.845 }, 00:19:18.845 "auth": { 00:19:18.845 "state": "completed", 00:19:18.845 "digest": "sha384", 00:19:18.845 "dhgroup": "ffdhe8192" 00:19:18.845 } 00:19:18.845 } 00:19:18.845 ]' 00:19:18.845 23:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.845 23:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:18.845 23:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.104 23:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:19.104 23:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.104 23:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.104 23:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.104 23:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.104 23:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MjVkNzI1MTNiNmM4NjJjYzE5ZDAxNDI3OWFlMDE4OWXgMoGK: --dhchap-ctrl-secret DHHC-1:02:ZTkyYjAyMTliZmRlYzQ1YTM4N2NiNDZjMDg2YTlkNmVmY2YwM2YwYjNiZmVjNWQ4pfzi3g==: 00:19:20.040 23:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.040 23:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:20.040 23:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:19:20.040 23:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.040 23:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:20.040 23:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.040 23:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:20.040 23:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:20.040 23:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:20.040 23:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.040 23:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:20.040 23:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:20.040 23:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:20.040 23:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.040 23:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.040 23:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:20.040 23:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.040 23:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:20.040 23:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.041 23:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.608 00:19:20.608 23:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.608 23:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.608 23:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.868 23:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.868 23:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.868 23:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:20.868 23:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.868 23:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:20.868 23:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.868 { 
00:19:20.868 "cntlid": 93, 00:19:20.868 "qid": 0, 00:19:20.868 "state": "enabled", 00:19:20.868 "thread": "nvmf_tgt_poll_group_000", 00:19:20.868 "listen_address": { 00:19:20.868 "trtype": "TCP", 00:19:20.868 "adrfam": "IPv4", 00:19:20.868 "traddr": "10.0.0.2", 00:19:20.868 "trsvcid": "4420" 00:19:20.868 }, 00:19:20.868 "peer_address": { 00:19:20.868 "trtype": "TCP", 00:19:20.868 "adrfam": "IPv4", 00:19:20.868 "traddr": "10.0.0.1", 00:19:20.868 "trsvcid": "50106" 00:19:20.868 }, 00:19:20.868 "auth": { 00:19:20.868 "state": "completed", 00:19:20.868 "digest": "sha384", 00:19:20.868 "dhgroup": "ffdhe8192" 00:19:20.868 } 00:19:20.868 } 00:19:20.868 ]' 00:19:20.868 23:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.868 23:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:20.868 23:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.868 23:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:20.868 23:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.868 23:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.868 23:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.868 23:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.128 23:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MjI5NjRhZmNiZGE4NWM0NDM3M2FhMjZmM2ZkNjNkOGM4ZmQ1NjVmYjI1ZjY1YTFlxsgRCQ==: --dhchap-ctrl-secret DHHC-1:01:NTk1NmU4YWU1YmYzZTlmZTI2YzQxOGJmMjMyMTQ2YWbsuYLn: 00:19:21.699 23:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.699 23:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:21.699 23:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:21.699 23:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.699 23:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:21.699 23:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.699 23:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:21.699 23:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:21.959 23:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:21.959 23:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.959 23:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:21.959 23:55:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:21.959 23:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:21.959 23:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.959 23:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:21.959 23:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:21.959 23:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.959 23:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:21.959 23:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.959 23:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.529 00:19:22.529 23:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.529 23:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.530 23:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.530 23:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.530 23:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.530 23:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:22.530 23:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.530 23:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:22.530 23:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.530 { 00:19:22.530 "cntlid": 95, 00:19:22.530 "qid": 0, 00:19:22.530 "state": "enabled", 00:19:22.530 "thread": "nvmf_tgt_poll_group_000", 00:19:22.530 "listen_address": { 00:19:22.530 "trtype": "TCP", 00:19:22.530 "adrfam": "IPv4", 00:19:22.530 "traddr": "10.0.0.2", 00:19:22.530 "trsvcid": "4420" 00:19:22.530 }, 00:19:22.530 "peer_address": { 00:19:22.530 "trtype": "TCP", 00:19:22.530 "adrfam": "IPv4", 00:19:22.530 "traddr": "10.0.0.1", 00:19:22.530 "trsvcid": "50128" 00:19:22.530 }, 00:19:22.530 "auth": { 00:19:22.530 "state": "completed", 00:19:22.530 "digest": "sha384", 00:19:22.530 "dhgroup": "ffdhe8192" 00:19:22.530 } 00:19:22.530 } 00:19:22.530 ]' 00:19:22.530 23:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.790 23:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:22.790 23:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.790 23:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:22.790 23:55:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.790 23:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.790 23:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.790 23:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.050 23:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NDFjNDNjYjYxZThjYjBmZTQ0MGY4OGFkNGJjZmU3MTBlZTQzNzkwOWQxNjM4ODI2OTQ4NGFjNmU2YTgwYTg5MYax0PQ=: 00:19:23.622 23:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.622 23:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:23.622 23:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:23.622 23:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.622 23:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:23.622 23:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:23.622 23:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.622 23:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.622 23:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:23.622 23:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:23.883 23:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:23.883 23:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.883 23:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:23.883 23:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:23.883 23:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:23.883 23:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.883 23:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.883 23:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:23.883 23:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.883 23:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:23.883 23:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.883 23:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.883 00:19:23.883 23:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.883 23:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.883 23:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.143 23:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.144 23:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.144 23:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:24.144 23:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.144 23:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:24.144 23:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.144 { 00:19:24.144 "cntlid": 97, 00:19:24.144 "qid": 0, 00:19:24.144 "state": "enabled", 00:19:24.144 "thread": "nvmf_tgt_poll_group_000", 00:19:24.144 "listen_address": { 00:19:24.144 "trtype": "TCP", 00:19:24.144 "adrfam": "IPv4", 00:19:24.144 "traddr": "10.0.0.2", 00:19:24.144 "trsvcid": "4420" 00:19:24.144 }, 00:19:24.144 "peer_address": { 00:19:24.144 "trtype": "TCP", 00:19:24.144 "adrfam": "IPv4", 00:19:24.144 "traddr": "10.0.0.1", 00:19:24.144 "trsvcid": "50168" 00:19:24.144 }, 00:19:24.144 "auth": { 00:19:24.144 "state": "completed", 00:19:24.144 "digest": "sha512", 00:19:24.144 "dhgroup": "null" 00:19:24.144 } 00:19:24.144 } 00:19:24.144 ]' 00:19:24.144 23:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.144 23:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.144 23:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.144 23:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:24.144 23:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.404 23:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.404 23:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.404 23:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.404 23:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:Mjc0ZTY3M2MyMDZhNjI0MDJhMDI3NjIxMjU1OTEwMTY0OGFjOWVmOTBlZWI0YzY1Q0Sqzg==: --dhchap-ctrl-secret 
DHHC-1:03:ODFmMDQ4MDI3YjkzMjk3NzFjMGUxOWQzNjhmMjcyYzU2MWVhYWMwYzE4ODI0ZmFlZWE5ZjM1OGUwZDJlODk2ZVyK/e4=: 00:19:25.344 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.344 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:25.344 23:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:25.344 23:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.344 23:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:25.344 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.344 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:25.344 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:25.344 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:25.344 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.344 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:25.344 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:25.344 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:25.344 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.344 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.344 23:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:25.344 23:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.344 23:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:25.344 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.344 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.605 00:19:25.605 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.605 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.605 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.605 23:55:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.605 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.605 23:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:25.605 23:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.605 23:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:25.605 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.605 { 00:19:25.605 "cntlid": 99, 00:19:25.605 "qid": 0, 00:19:25.605 "state": "enabled", 00:19:25.605 "thread": "nvmf_tgt_poll_group_000", 00:19:25.605 "listen_address": { 00:19:25.605 "trtype": "TCP", 00:19:25.605 "adrfam": "IPv4", 00:19:25.605 "traddr": "10.0.0.2", 00:19:25.605 "trsvcid": "4420" 00:19:25.605 }, 00:19:25.605 "peer_address": { 00:19:25.605 "trtype": "TCP", 00:19:25.605 "adrfam": "IPv4", 00:19:25.605 "traddr": "10.0.0.1", 00:19:25.605 "trsvcid": "50204" 00:19:25.605 }, 00:19:25.605 "auth": { 00:19:25.605 "state": "completed", 00:19:25.605 "digest": "sha512", 00:19:25.605 "dhgroup": "null" 00:19:25.605 } 00:19:25.605 } 00:19:25.605 ]' 00:19:25.865 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.865 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:25.865 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.865 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:25.865 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.865 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.865 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.865 23:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.125 23:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MjVkNzI1MTNiNmM4NjJjYzE5ZDAxNDI3OWFlMDE4OWXgMoGK: --dhchap-ctrl-secret DHHC-1:02:ZTkyYjAyMTliZmRlYzQ1YTM4N2NiNDZjMDg2YTlkNmVmY2YwM2YwYjNiZmVjNWQ4pfzi3g==: 00:19:26.694 23:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.694 23:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:26.694 23:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:26.695 23:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.695 23:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:26.695 23:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.695 23:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:26.695 23:55:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:26.954 23:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:26.954 23:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.954 23:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:26.954 23:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:26.954 23:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:26.954 23:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.954 23:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.954 23:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:26.954 23:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.954 23:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:26.954 23:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.954 23:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.214 00:19:27.214 23:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.214 23:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.214 23:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.214 23:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.214 23:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.214 23:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:27.214 23:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.214 23:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:27.214 23:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.214 { 00:19:27.214 "cntlid": 101, 00:19:27.214 "qid": 0, 00:19:27.214 "state": "enabled", 00:19:27.214 "thread": "nvmf_tgt_poll_group_000", 00:19:27.214 "listen_address": { 00:19:27.214 "trtype": "TCP", 00:19:27.214 "adrfam": "IPv4", 00:19:27.214 "traddr": "10.0.0.2", 00:19:27.214 "trsvcid": "4420" 00:19:27.214 }, 00:19:27.214 "peer_address": { 00:19:27.214 "trtype": "TCP", 00:19:27.214 "adrfam": "IPv4", 00:19:27.214 "traddr": "10.0.0.1", 00:19:27.214 "trsvcid": "50234" 00:19:27.214 }, 00:19:27.214 "auth": 
{ 00:19:27.214 "state": "completed", 00:19:27.214 "digest": "sha512", 00:19:27.214 "dhgroup": "null" 00:19:27.214 } 00:19:27.214 } 00:19:27.214 ]' 00:19:27.214 23:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.214 23:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:27.214 23:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.473 23:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:27.473 23:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.473 23:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.473 23:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.473 23:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.473 23:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MjI5NjRhZmNiZGE4NWM0NDM3M2FhMjZmM2ZkNjNkOGM4ZmQ1NjVmYjI1ZjY1YTFlxsgRCQ==: --dhchap-ctrl-secret DHHC-1:01:NTk1NmU4YWU1YmYzZTlmZTI2YzQxOGJmMjMyMTQ2YWbsuYLn: 00:19:28.412 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.413 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:28.413 23:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:28.413 23:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.413 23:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:28.413 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.413 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:28.413 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:28.413 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:28.413 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.413 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:28.413 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:28.413 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:28.413 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.413 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:28.413 23:55:43 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:19:28.413 23:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.413 23:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:28.413 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.413 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.672 00:19:28.672 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.672 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.672 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.672 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.672 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.672 23:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:28.672 23:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.672 23:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:28.672 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.672 { 00:19:28.672 "cntlid": 103, 00:19:28.672 "qid": 0, 00:19:28.672 "state": "enabled", 00:19:28.672 "thread": "nvmf_tgt_poll_group_000", 00:19:28.672 "listen_address": { 00:19:28.672 "trtype": "TCP", 00:19:28.672 "adrfam": "IPv4", 00:19:28.672 "traddr": "10.0.0.2", 00:19:28.672 "trsvcid": "4420" 00:19:28.672 }, 00:19:28.672 "peer_address": { 00:19:28.672 "trtype": "TCP", 00:19:28.672 "adrfam": "IPv4", 00:19:28.672 "traddr": "10.0.0.1", 00:19:28.672 "trsvcid": "50544" 00:19:28.672 }, 00:19:28.672 "auth": { 00:19:28.672 "state": "completed", 00:19:28.672 "digest": "sha512", 00:19:28.672 "dhgroup": "null" 00:19:28.672 } 00:19:28.672 } 00:19:28.672 ]' 00:19:28.672 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.932 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.932 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.932 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:28.932 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.932 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.932 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.932 23:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.191 23:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NDFjNDNjYjYxZThjYjBmZTQ0MGY4OGFkNGJjZmU3MTBlZTQzNzkwOWQxNjM4ODI2OTQ4NGFjNmU2YTgwYTg5MYax0PQ=: 00:19:29.761 23:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.761 23:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:29.761 23:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:29.761 23:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.761 23:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:29.761 23:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.761 23:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.761 23:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:29.761 23:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:30.021 23:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:30.021 23:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.021 23:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:30.021 23:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:30.021 23:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:30.021 23:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.021 23:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.021 23:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:30.021 23:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.021 23:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:30.021 23:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.021 23:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.281 00:19:30.281 23:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.281 23:55:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.281 23:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.281 23:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.281 23:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.281 23:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:30.281 23:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.281 23:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:30.281 23:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.281 { 00:19:30.281 "cntlid": 105, 00:19:30.281 "qid": 0, 00:19:30.281 "state": "enabled", 00:19:30.281 "thread": "nvmf_tgt_poll_group_000", 00:19:30.281 "listen_address": { 00:19:30.281 "trtype": "TCP", 00:19:30.281 "adrfam": "IPv4", 00:19:30.281 "traddr": "10.0.0.2", 00:19:30.281 "trsvcid": "4420" 00:19:30.281 }, 00:19:30.281 "peer_address": { 00:19:30.281 "trtype": "TCP", 00:19:30.281 "adrfam": "IPv4", 00:19:30.281 "traddr": "10.0.0.1", 00:19:30.281 "trsvcid": "50572" 00:19:30.281 }, 00:19:30.281 "auth": { 00:19:30.281 "state": "completed", 00:19:30.281 "digest": "sha512", 00:19:30.281 "dhgroup": "ffdhe2048" 00:19:30.281 } 00:19:30.281 } 00:19:30.281 ]' 00:19:30.281 23:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.281 23:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.281 23:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.542 23:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:30.542 23:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.542 23:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.542 23:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.542 23:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.542 23:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:Mjc0ZTY3M2MyMDZhNjI0MDJhMDI3NjIxMjU1OTEwMTY0OGFjOWVmOTBlZWI0YzY1Q0Sqzg==: --dhchap-ctrl-secret DHHC-1:03:ODFmMDQ4MDI3YjkzMjk3NzFjMGUxOWQzNjhmMjcyYzU2MWVhYWMwYzE4ODI0ZmFlZWE5ZjM1OGUwZDJlODk2ZVyK/e4=: 00:19:31.483 23:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.483 23:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:31.483 23:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:31.483 23:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:31.483 23:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:31.483 23:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.483 23:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:31.483 23:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:31.483 23:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:31.483 23:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.483 23:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:31.483 23:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:31.483 23:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:31.483 23:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.483 23:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.483 23:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:31.483 23:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.483 23:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:31.483 23:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.483 23:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.743 00:19:31.743 23:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.743 23:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.743 23:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.743 23:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.743 23:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.743 23:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:31.743 23:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.743 23:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:31.743 23:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.743 { 00:19:31.743 "cntlid": 107, 00:19:31.743 "qid": 0, 00:19:31.743 "state": "enabled", 00:19:31.743 "thread": 
"nvmf_tgt_poll_group_000", 00:19:31.743 "listen_address": { 00:19:31.743 "trtype": "TCP", 00:19:31.743 "adrfam": "IPv4", 00:19:31.743 "traddr": "10.0.0.2", 00:19:31.743 "trsvcid": "4420" 00:19:31.743 }, 00:19:31.743 "peer_address": { 00:19:31.743 "trtype": "TCP", 00:19:31.743 "adrfam": "IPv4", 00:19:31.743 "traddr": "10.0.0.1", 00:19:31.743 "trsvcid": "50594" 00:19:31.743 }, 00:19:31.743 "auth": { 00:19:31.743 "state": "completed", 00:19:31.743 "digest": "sha512", 00:19:31.743 "dhgroup": "ffdhe2048" 00:19:31.743 } 00:19:31.743 } 00:19:31.743 ]' 00:19:31.743 23:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.003 23:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:32.003 23:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.003 23:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:32.003 23:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.003 23:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.003 23:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.003 23:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.264 23:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MjVkNzI1MTNiNmM4NjJjYzE5ZDAxNDI3OWFlMDE4OWXgMoGK: --dhchap-ctrl-secret DHHC-1:02:ZTkyYjAyMTliZmRlYzQ1YTM4N2NiNDZjMDg2YTlkNmVmY2YwM2YwYjNiZmVjNWQ4pfzi3g==: 00:19:32.835 23:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.835 23:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:32.835 23:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:32.835 23:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.835 23:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:32.835 23:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.835 23:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:32.835 23:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:33.095 23:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:33.095 23:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.095 23:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:33.095 23:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:33.095 23:55:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:33.095 23:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.095 23:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.095 23:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:33.095 23:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.095 23:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:33.095 23:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.096 23:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.356 00:19:33.356 23:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.356 23:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.356 23:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.356 23:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.356 23:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.356 23:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:33.356 23:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.356 23:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:33.356 23:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.356 { 00:19:33.356 "cntlid": 109, 00:19:33.356 "qid": 0, 00:19:33.356 "state": "enabled", 00:19:33.356 "thread": "nvmf_tgt_poll_group_000", 00:19:33.356 "listen_address": { 00:19:33.356 "trtype": "TCP", 00:19:33.356 "adrfam": "IPv4", 00:19:33.356 "traddr": "10.0.0.2", 00:19:33.356 "trsvcid": "4420" 00:19:33.356 }, 00:19:33.356 "peer_address": { 00:19:33.356 "trtype": "TCP", 00:19:33.356 "adrfam": "IPv4", 00:19:33.356 "traddr": "10.0.0.1", 00:19:33.356 "trsvcid": "50622" 00:19:33.356 }, 00:19:33.356 "auth": { 00:19:33.356 "state": "completed", 00:19:33.356 "digest": "sha512", 00:19:33.356 "dhgroup": "ffdhe2048" 00:19:33.356 } 00:19:33.356 } 00:19:33.356 ]' 00:19:33.356 23:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.617 23:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.617 23:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.617 23:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:33.617 23:55:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.617 23:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.617 23:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.617 23:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.617 23:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MjI5NjRhZmNiZGE4NWM0NDM3M2FhMjZmM2ZkNjNkOGM4ZmQ1NjVmYjI1ZjY1YTFlxsgRCQ==: --dhchap-ctrl-secret DHHC-1:01:NTk1NmU4YWU1YmYzZTlmZTI2YzQxOGJmMjMyMTQ2YWbsuYLn: 00:19:34.602 23:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.602 23:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:34.602 23:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:34.602 23:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.602 23:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:34.602 23:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.602 23:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:34.602 23:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:34.602 23:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:34.602 23:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.602 23:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:34.602 23:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:34.602 23:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:34.602 23:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.602 23:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:34.602 23:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:34.602 23:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.602 23:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:34.602 23:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.602 23:55:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.908 00:19:34.908 23:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.908 23:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.908 23:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.908 23:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.908 23:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.908 23:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:34.908 23:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.908 23:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:34.908 23:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.908 { 00:19:34.908 "cntlid": 111, 00:19:34.908 "qid": 0, 00:19:34.908 "state": "enabled", 00:19:34.908 "thread": "nvmf_tgt_poll_group_000", 00:19:34.908 "listen_address": { 00:19:34.908 "trtype": "TCP", 00:19:34.908 "adrfam": "IPv4", 00:19:34.908 "traddr": "10.0.0.2", 00:19:34.908 "trsvcid": "4420" 00:19:34.908 }, 00:19:34.908 "peer_address": { 00:19:34.908 "trtype": "TCP", 00:19:34.908 "adrfam": "IPv4", 00:19:34.908 "traddr": "10.0.0.1", 00:19:34.908 "trsvcid": "50644" 00:19:34.908 }, 00:19:34.908 "auth": { 00:19:34.908 "state": "completed", 00:19:34.908 "digest": "sha512", 00:19:34.908 "dhgroup": "ffdhe2048" 00:19:34.908 } 00:19:34.908 } 00:19:34.908 ]' 00:19:34.908 23:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.169 23:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.169 23:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.169 23:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:35.169 23:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.169 23:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.169 23:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.169 23:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.430 23:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NDFjNDNjYjYxZThjYjBmZTQ0MGY4OGFkNGJjZmU3MTBlZTQzNzkwOWQxNjM4ODI2OTQ4NGFjNmU2YTgwYTg5MYax0PQ=: 00:19:36.001 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.001 23:55:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:36.001 23:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:36.001 23:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.001 23:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:36.001 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:36.001 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.001 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:36.001 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:36.260 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:36.260 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.260 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:36.260 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:36.260 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:36.260 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.260 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.260 23:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:36.260 23:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.260 23:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:36.260 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.261 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.519 00:19:36.519 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.519 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.519 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.519 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.519 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.519 23:55:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:19:36.519 23:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.519 23:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:36.519 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.519 { 00:19:36.519 "cntlid": 113, 00:19:36.519 "qid": 0, 00:19:36.519 "state": "enabled", 00:19:36.519 "thread": "nvmf_tgt_poll_group_000", 00:19:36.519 "listen_address": { 00:19:36.519 "trtype": "TCP", 00:19:36.519 "adrfam": "IPv4", 00:19:36.519 "traddr": "10.0.0.2", 00:19:36.519 "trsvcid": "4420" 00:19:36.519 }, 00:19:36.519 "peer_address": { 00:19:36.519 "trtype": "TCP", 00:19:36.519 "adrfam": "IPv4", 00:19:36.519 "traddr": "10.0.0.1", 00:19:36.519 "trsvcid": "50674" 00:19:36.519 }, 00:19:36.519 "auth": { 00:19:36.519 "state": "completed", 00:19:36.519 "digest": "sha512", 00:19:36.519 "dhgroup": "ffdhe3072" 00:19:36.519 } 00:19:36.519 } 00:19:36.519 ]' 00:19:36.519 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.519 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.519 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.519 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:36.519 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.779 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.779 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.779 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.779 23:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:Mjc0ZTY3M2MyMDZhNjI0MDJhMDI3NjIxMjU1OTEwMTY0OGFjOWVmOTBlZWI0YzY1Q0Sqzg==: --dhchap-ctrl-secret DHHC-1:03:ODFmMDQ4MDI3YjkzMjk3NzFjMGUxOWQzNjhmMjcyYzU2MWVhYWMwYzE4ODI0ZmFlZWE5ZjM1OGUwZDJlODk2ZVyK/e4=: 00:19:37.718 23:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.718 23:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:37.718 23:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:37.718 23:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.718 23:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:37.718 23:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.718 23:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:37.718 23:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:37.718 23:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:37.718 23:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.718 23:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:37.718 23:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:37.718 23:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:37.718 23:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.718 23:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.718 23:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:37.718 23:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.718 23:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:37.718 23:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.718 23:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.977 00:19:37.977 23:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.977 23:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.977 23:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.977 23:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.977 23:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.977 23:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:37.977 23:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.236 23:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:38.236 23:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.236 { 00:19:38.236 "cntlid": 115, 00:19:38.236 "qid": 0, 00:19:38.236 "state": "enabled", 00:19:38.236 "thread": "nvmf_tgt_poll_group_000", 00:19:38.236 "listen_address": { 00:19:38.236 "trtype": "TCP", 00:19:38.236 "adrfam": "IPv4", 00:19:38.236 "traddr": "10.0.0.2", 00:19:38.236 "trsvcid": "4420" 00:19:38.236 }, 00:19:38.236 "peer_address": { 00:19:38.236 "trtype": "TCP", 00:19:38.236 "adrfam": "IPv4", 00:19:38.236 "traddr": "10.0.0.1", 00:19:38.236 "trsvcid": "50700" 00:19:38.236 }, 00:19:38.236 "auth": { 00:19:38.236 "state": "completed", 00:19:38.236 "digest": "sha512", 00:19:38.236 "dhgroup": "ffdhe3072" 00:19:38.236 } 00:19:38.236 } 
00:19:38.236 ]' 00:19:38.236 23:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.236 23:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.236 23:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.236 23:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:38.237 23:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.237 23:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.237 23:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.237 23:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.495 23:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MjVkNzI1MTNiNmM4NjJjYzE5ZDAxNDI3OWFlMDE4OWXgMoGK: --dhchap-ctrl-secret DHHC-1:02:ZTkyYjAyMTliZmRlYzQ1YTM4N2NiNDZjMDg2YTlkNmVmY2YwM2YwYjNiZmVjNWQ4pfzi3g==: 00:19:39.064 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.064 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:39.064 23:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:39.064 23:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.064 23:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:39.064 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.064 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:39.064 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:39.325 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:39.325 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.325 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:39.325 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:39.325 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:39.325 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.325 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.325 23:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:39.325 23:55:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.325 23:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:39.325 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.325 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.325 00:19:39.585 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.585 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.585 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.585 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.585 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.585 23:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:39.585 23:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.585 23:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:39.585 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.585 { 00:19:39.585 "cntlid": 117, 00:19:39.585 "qid": 0, 00:19:39.585 "state": "enabled", 00:19:39.585 "thread": "nvmf_tgt_poll_group_000", 00:19:39.585 "listen_address": { 00:19:39.585 "trtype": "TCP", 00:19:39.585 "adrfam": "IPv4", 00:19:39.585 "traddr": "10.0.0.2", 00:19:39.585 "trsvcid": "4420" 00:19:39.585 }, 00:19:39.585 "peer_address": { 00:19:39.585 "trtype": "TCP", 00:19:39.585 "adrfam": "IPv4", 00:19:39.585 "traddr": "10.0.0.1", 00:19:39.585 "trsvcid": "45190" 00:19:39.585 }, 00:19:39.585 "auth": { 00:19:39.585 "state": "completed", 00:19:39.585 "digest": "sha512", 00:19:39.585 "dhgroup": "ffdhe3072" 00:19:39.585 } 00:19:39.585 } 00:19:39.585 ]' 00:19:39.585 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.585 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:39.585 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.845 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:39.845 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.845 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.845 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.845 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.845 23:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MjI5NjRhZmNiZGE4NWM0NDM3M2FhMjZmM2ZkNjNkOGM4ZmQ1NjVmYjI1ZjY1YTFlxsgRCQ==: --dhchap-ctrl-secret DHHC-1:01:NTk1NmU4YWU1YmYzZTlmZTI2YzQxOGJmMjMyMTQ2YWbsuYLn: 00:19:40.786 23:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.786 23:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:40.786 23:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:40.786 23:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.786 23:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:40.786 23:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.786 23:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:40.786 23:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:40.786 23:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:40.786 23:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.786 23:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:40.786 23:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:40.786 23:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:40.786 23:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.786 23:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:40.786 23:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:40.786 23:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.786 23:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:40.786 23:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.786 23:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:41.046 00:19:41.046 23:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.046 23:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.046 23:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.306 23:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.306 23:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.306 23:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:41.306 23:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.306 23:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:41.306 23:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.306 { 00:19:41.306 "cntlid": 119, 00:19:41.306 "qid": 0, 00:19:41.306 "state": "enabled", 00:19:41.306 "thread": "nvmf_tgt_poll_group_000", 00:19:41.306 "listen_address": { 00:19:41.306 "trtype": "TCP", 00:19:41.306 "adrfam": "IPv4", 00:19:41.306 "traddr": "10.0.0.2", 00:19:41.306 "trsvcid": "4420" 00:19:41.306 }, 00:19:41.306 "peer_address": { 00:19:41.306 "trtype": "TCP", 00:19:41.306 "adrfam": "IPv4", 00:19:41.306 "traddr": "10.0.0.1", 00:19:41.306 "trsvcid": "45220" 00:19:41.306 }, 00:19:41.306 "auth": { 00:19:41.306 "state": "completed", 00:19:41.306 "digest": "sha512", 00:19:41.306 "dhgroup": "ffdhe3072" 00:19:41.306 } 00:19:41.306 } 00:19:41.306 ]' 00:19:41.306 23:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.306 23:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.306 23:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.306 23:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:41.306 23:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.306 23:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.307 23:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.307 23:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.565 23:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NDFjNDNjYjYxZThjYjBmZTQ0MGY4OGFkNGJjZmU3MTBlZTQzNzkwOWQxNjM4ODI2OTQ4NGFjNmU2YTgwYTg5MYax0PQ=: 00:19:42.135 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.135 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:42.135 23:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:42.135 23:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.135 23:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:42.135 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.135 23:55:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.135 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:42.135 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:42.395 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:42.395 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.395 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:42.395 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:42.395 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:42.395 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.395 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.395 23:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:42.395 23:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.395 23:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:42.395 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.395 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.656 00:19:42.656 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.656 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.656 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.656 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.656 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.656 23:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:42.656 23:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.656 23:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:42.656 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.656 { 00:19:42.656 "cntlid": 121, 00:19:42.656 "qid": 0, 00:19:42.656 "state": "enabled", 00:19:42.656 "thread": "nvmf_tgt_poll_group_000", 00:19:42.656 "listen_address": { 00:19:42.656 "trtype": "TCP", 00:19:42.656 "adrfam": "IPv4", 
00:19:42.656 "traddr": "10.0.0.2", 00:19:42.656 "trsvcid": "4420" 00:19:42.656 }, 00:19:42.656 "peer_address": { 00:19:42.656 "trtype": "TCP", 00:19:42.656 "adrfam": "IPv4", 00:19:42.656 "traddr": "10.0.0.1", 00:19:42.656 "trsvcid": "45240" 00:19:42.656 }, 00:19:42.656 "auth": { 00:19:42.656 "state": "completed", 00:19:42.656 "digest": "sha512", 00:19:42.656 "dhgroup": "ffdhe4096" 00:19:42.656 } 00:19:42.656 } 00:19:42.656 ]' 00:19:42.656 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.917 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:42.917 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.917 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:42.917 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.918 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.918 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.918 23:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.918 23:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:Mjc0ZTY3M2MyMDZhNjI0MDJhMDI3NjIxMjU1OTEwMTY0OGFjOWVmOTBlZWI0YzY1Q0Sqzg==: --dhchap-ctrl-secret DHHC-1:03:ODFmMDQ4MDI3YjkzMjk3NzFjMGUxOWQzNjhmMjcyYzU2MWVhYWMwYzE4ODI0ZmFlZWE5ZjM1OGUwZDJlODk2ZVyK/e4=: 00:19:43.860 23:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.860 23:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:43.860 23:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:43.860 23:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.860 23:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:43.860 23:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.860 23:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:43.860 23:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:43.860 23:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:43.860 23:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.860 23:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:43.860 23:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:43.860 23:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:43.860 23:55:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.860 23:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.860 23:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:43.860 23:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.860 23:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:43.860 23:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.860 23:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.120 00:19:44.120 23:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.120 23:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.120 23:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.381 23:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.381 23:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.381 23:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:44.381 23:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.381 23:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:44.381 23:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.381 { 00:19:44.381 "cntlid": 123, 00:19:44.381 "qid": 0, 00:19:44.381 "state": "enabled", 00:19:44.381 "thread": "nvmf_tgt_poll_group_000", 00:19:44.381 "listen_address": { 00:19:44.381 "trtype": "TCP", 00:19:44.381 "adrfam": "IPv4", 00:19:44.381 "traddr": "10.0.0.2", 00:19:44.381 "trsvcid": "4420" 00:19:44.381 }, 00:19:44.381 "peer_address": { 00:19:44.381 "trtype": "TCP", 00:19:44.381 "adrfam": "IPv4", 00:19:44.381 "traddr": "10.0.0.1", 00:19:44.381 "trsvcid": "45272" 00:19:44.381 }, 00:19:44.381 "auth": { 00:19:44.381 "state": "completed", 00:19:44.381 "digest": "sha512", 00:19:44.381 "dhgroup": "ffdhe4096" 00:19:44.381 } 00:19:44.381 } 00:19:44.381 ]' 00:19:44.381 23:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.381 23:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:44.381 23:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.381 23:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:44.381 23:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.642 23:55:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.642 23:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.642 23:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.642 23:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MjVkNzI1MTNiNmM4NjJjYzE5ZDAxNDI3OWFlMDE4OWXgMoGK: --dhchap-ctrl-secret DHHC-1:02:ZTkyYjAyMTliZmRlYzQ1YTM4N2NiNDZjMDg2YTlkNmVmY2YwM2YwYjNiZmVjNWQ4pfzi3g==: 00:19:45.584 23:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.584 23:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:45.584 23:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:45.584 23:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.584 23:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:45.584 23:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.584 23:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:45.584 23:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:45.584 23:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:45.584 23:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.584 23:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:45.584 23:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:45.585 23:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:45.585 23:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.585 23:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.585 23:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:45.585 23:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.585 23:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:45.585 23:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.585 23:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.846 00:19:45.846 23:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.846 23:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.846 23:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.846 23:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.846 23:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.846 23:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:45.846 23:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.106 23:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:46.106 23:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.106 { 00:19:46.106 "cntlid": 125, 00:19:46.106 "qid": 0, 00:19:46.106 "state": "enabled", 00:19:46.106 "thread": "nvmf_tgt_poll_group_000", 00:19:46.106 "listen_address": { 00:19:46.106 "trtype": "TCP", 00:19:46.106 "adrfam": "IPv4", 00:19:46.106 "traddr": "10.0.0.2", 00:19:46.106 "trsvcid": "4420" 00:19:46.106 }, 00:19:46.106 "peer_address": { 00:19:46.106 "trtype": "TCP", 00:19:46.106 "adrfam": "IPv4", 00:19:46.106 "traddr": "10.0.0.1", 00:19:46.106 "trsvcid": "45304" 00:19:46.106 }, 00:19:46.106 "auth": { 00:19:46.106 "state": "completed", 00:19:46.106 "digest": "sha512", 00:19:46.106 "dhgroup": "ffdhe4096" 00:19:46.106 } 00:19:46.106 } 00:19:46.106 ]' 00:19:46.106 23:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.106 23:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:46.106 23:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.106 23:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:46.106 23:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.106 23:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.106 23:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.106 23:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.366 23:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MjI5NjRhZmNiZGE4NWM0NDM3M2FhMjZmM2ZkNjNkOGM4ZmQ1NjVmYjI1ZjY1YTFlxsgRCQ==: --dhchap-ctrl-secret DHHC-1:01:NTk1NmU4YWU1YmYzZTlmZTI2YzQxOGJmMjMyMTQ2YWbsuYLn: 00:19:46.935 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
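Note: the trace above and below repeats the same connect_authenticate pattern once per (digest, dhgroup, key index) combination. The following is a condensed, illustrative outline of a single iteration, assembled only from commands that appear in this trace; the rpc.py path is shortened to scripts/rpc.py, the DHHC-1 secrets are elided, and the actual control flow lives in target/auth.sh and common/autotest_common.sh, so treat this as a sketch rather than the script itself:

    # Host-side RPC (-s /var/tmp/host.sock): restrict negotiation to the digest/dhgroup under test
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # Target-side RPC (default socket): allow the host NQN with DH-CHAP key0 (and ckey0 when defined)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach a controller from the host, then verify the authenticated qpair on the target
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expects nvme0
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.state'    # expects "completed"; .digest and .dhgroup are checked the same way

    # Detach, then repeat the handshake with nvme-cli using the raw DHHC-1 secrets (-i 1: single I/O queue)
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # Remove the host again before the next (digest, dhgroup, key) combination
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

The design point the trace exercises is that keys configured by name on the subsystem (key0..key3, plus optional controller keys) must authenticate both through the SPDK host stack (bdev_nvme_attach_controller) and through the kernel initiator (nvme connect with the literal DHHC-1 secrets), for every digest/dhgroup pairing.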
00:19:46.935 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:46.935 23:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:46.935 23:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.935 23:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:46.935 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.935 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:46.935 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:47.194 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:47.194 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.194 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:47.194 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:47.194 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:47.194 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.195 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:47.195 23:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:47.195 23:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.195 23:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:47.195 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.195 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.455 00:19:47.455 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.455 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.455 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.455 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.455 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.455 23:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:47.455 23:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:47.455 23:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:47.455 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.455 { 00:19:47.455 "cntlid": 127, 00:19:47.455 "qid": 0, 00:19:47.455 "state": "enabled", 00:19:47.455 "thread": "nvmf_tgt_poll_group_000", 00:19:47.455 "listen_address": { 00:19:47.455 "trtype": "TCP", 00:19:47.455 "adrfam": "IPv4", 00:19:47.455 "traddr": "10.0.0.2", 00:19:47.455 "trsvcid": "4420" 00:19:47.455 }, 00:19:47.455 "peer_address": { 00:19:47.455 "trtype": "TCP", 00:19:47.455 "adrfam": "IPv4", 00:19:47.455 "traddr": "10.0.0.1", 00:19:47.455 "trsvcid": "45344" 00:19:47.455 }, 00:19:47.455 "auth": { 00:19:47.455 "state": "completed", 00:19:47.455 "digest": "sha512", 00:19:47.455 "dhgroup": "ffdhe4096" 00:19:47.455 } 00:19:47.455 } 00:19:47.455 ]' 00:19:47.455 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.715 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:47.715 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.715 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:47.715 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.715 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.715 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.715 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.975 23:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NDFjNDNjYjYxZThjYjBmZTQ0MGY4OGFkNGJjZmU3MTBlZTQzNzkwOWQxNjM4ODI2OTQ4NGFjNmU2YTgwYTg5MYax0PQ=: 00:19:48.544 23:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.544 23:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:48.544 23:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:48.544 23:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.544 23:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:48.544 23:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.544 23:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.544 23:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:48.544 23:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:48.544 23:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:19:48.544 23:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.544 23:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:48.544 23:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:48.544 23:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:48.544 23:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.544 23:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.544 23:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:48.544 23:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.805 23:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:48.805 23:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.805 23:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.064 00:19:49.064 23:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.064 23:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.064 23:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.324 23:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.324 23:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.324 23:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:49.324 23:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.324 23:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:49.324 23:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.324 { 00:19:49.324 "cntlid": 129, 00:19:49.324 "qid": 0, 00:19:49.324 "state": "enabled", 00:19:49.324 "thread": "nvmf_tgt_poll_group_000", 00:19:49.324 "listen_address": { 00:19:49.324 "trtype": "TCP", 00:19:49.324 "adrfam": "IPv4", 00:19:49.324 "traddr": "10.0.0.2", 00:19:49.324 "trsvcid": "4420" 00:19:49.324 }, 00:19:49.324 "peer_address": { 00:19:49.324 "trtype": "TCP", 00:19:49.324 "adrfam": "IPv4", 00:19:49.324 "traddr": "10.0.0.1", 00:19:49.324 "trsvcid": "50198" 00:19:49.324 }, 00:19:49.324 "auth": { 00:19:49.324 "state": "completed", 00:19:49.324 "digest": "sha512", 00:19:49.324 "dhgroup": "ffdhe6144" 00:19:49.324 } 00:19:49.324 } 00:19:49.324 ]' 00:19:49.324 23:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.324 23:56:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:49.324 23:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.324 23:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:49.324 23:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.324 23:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.324 23:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.324 23:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.585 23:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:Mjc0ZTY3M2MyMDZhNjI0MDJhMDI3NjIxMjU1OTEwMTY0OGFjOWVmOTBlZWI0YzY1Q0Sqzg==: --dhchap-ctrl-secret DHHC-1:03:ODFmMDQ4MDI3YjkzMjk3NzFjMGUxOWQzNjhmMjcyYzU2MWVhYWMwYzE4ODI0ZmFlZWE5ZjM1OGUwZDJlODk2ZVyK/e4=: 00:19:50.155 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.155 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:50.155 23:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:50.155 23:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.155 23:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:50.155 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.155 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:50.155 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:50.415 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:50.415 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.415 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:50.415 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:50.415 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:50.415 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.415 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.415 23:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:50.415 23:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.415 23:56:05 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:50.415 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.415 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.676 00:19:50.676 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.676 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.676 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.937 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.937 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.937 23:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:50.937 23:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.937 23:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:50.937 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.937 { 00:19:50.937 "cntlid": 131, 00:19:50.937 "qid": 0, 00:19:50.937 "state": "enabled", 00:19:50.937 "thread": "nvmf_tgt_poll_group_000", 00:19:50.937 "listen_address": { 00:19:50.937 "trtype": "TCP", 00:19:50.937 "adrfam": "IPv4", 00:19:50.937 "traddr": "10.0.0.2", 00:19:50.937 "trsvcid": "4420" 00:19:50.937 }, 00:19:50.937 "peer_address": { 00:19:50.937 "trtype": "TCP", 00:19:50.937 "adrfam": "IPv4", 00:19:50.937 "traddr": "10.0.0.1", 00:19:50.937 "trsvcid": "50222" 00:19:50.937 }, 00:19:50.937 "auth": { 00:19:50.937 "state": "completed", 00:19:50.937 "digest": "sha512", 00:19:50.937 "dhgroup": "ffdhe6144" 00:19:50.937 } 00:19:50.937 } 00:19:50.937 ]' 00:19:50.937 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.937 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:50.937 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.937 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:50.937 23:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.937 23:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.937 23:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.937 23:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.197 23:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MjVkNzI1MTNiNmM4NjJjYzE5ZDAxNDI3OWFlMDE4OWXgMoGK: --dhchap-ctrl-secret DHHC-1:02:ZTkyYjAyMTliZmRlYzQ1YTM4N2NiNDZjMDg2YTlkNmVmY2YwM2YwYjNiZmVjNWQ4pfzi3g==: 00:19:51.769 23:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.769 23:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:51.769 23:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:51.769 23:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.769 23:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:51.769 23:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.769 23:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:51.769 23:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:52.030 23:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:52.030 23:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.030 23:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:52.030 23:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:52.030 23:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:52.030 23:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.030 23:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.030 23:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:52.030 23:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.030 23:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:52.030 23:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.030 23:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.291 00:19:52.291 23:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.291 23:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.291 23:56:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.551 23:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.552 23:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.552 23:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:52.552 23:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.552 23:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:52.552 23:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.552 { 00:19:52.552 "cntlid": 133, 00:19:52.552 "qid": 0, 00:19:52.552 "state": "enabled", 00:19:52.552 "thread": "nvmf_tgt_poll_group_000", 00:19:52.552 "listen_address": { 00:19:52.552 "trtype": "TCP", 00:19:52.552 "adrfam": "IPv4", 00:19:52.552 "traddr": "10.0.0.2", 00:19:52.552 "trsvcid": "4420" 00:19:52.552 }, 00:19:52.552 "peer_address": { 00:19:52.552 "trtype": "TCP", 00:19:52.552 "adrfam": "IPv4", 00:19:52.552 "traddr": "10.0.0.1", 00:19:52.552 "trsvcid": "50248" 00:19:52.552 }, 00:19:52.552 "auth": { 00:19:52.552 "state": "completed", 00:19:52.552 "digest": "sha512", 00:19:52.552 "dhgroup": "ffdhe6144" 00:19:52.552 } 00:19:52.552 } 00:19:52.552 ]' 00:19:52.552 23:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.552 23:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:52.552 23:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.552 23:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:52.552 23:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.812 23:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.812 23:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.812 23:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.812 23:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MjI5NjRhZmNiZGE4NWM0NDM3M2FhMjZmM2ZkNjNkOGM4ZmQ1NjVmYjI1ZjY1YTFlxsgRCQ==: --dhchap-ctrl-secret DHHC-1:01:NTk1NmU4YWU1YmYzZTlmZTI2YzQxOGJmMjMyMTQ2YWbsuYLn: 00:19:53.756 23:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.756 23:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:53.756 23:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:53.756 23:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.756 23:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:53.756 23:56:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.756 23:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:53.756 23:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:53.756 23:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:53.756 23:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.756 23:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:53.756 23:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:53.756 23:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:53.756 23:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.756 23:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:53.756 23:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:53.756 23:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.756 23:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:53.756 23:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.756 23:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.018 00:19:54.018 23:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.018 23:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.018 23:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.279 23:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.279 23:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.279 23:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:54.279 23:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.279 23:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:54.279 23:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.279 { 00:19:54.279 "cntlid": 135, 00:19:54.279 "qid": 0, 00:19:54.279 "state": "enabled", 00:19:54.279 "thread": "nvmf_tgt_poll_group_000", 00:19:54.279 "listen_address": { 00:19:54.279 "trtype": "TCP", 00:19:54.279 "adrfam": "IPv4", 00:19:54.279 "traddr": "10.0.0.2", 00:19:54.279 "trsvcid": "4420" 00:19:54.279 }, 
00:19:54.279 "peer_address": { 00:19:54.279 "trtype": "TCP", 00:19:54.279 "adrfam": "IPv4", 00:19:54.279 "traddr": "10.0.0.1", 00:19:54.279 "trsvcid": "50264" 00:19:54.279 }, 00:19:54.279 "auth": { 00:19:54.279 "state": "completed", 00:19:54.279 "digest": "sha512", 00:19:54.279 "dhgroup": "ffdhe6144" 00:19:54.279 } 00:19:54.279 } 00:19:54.279 ]' 00:19:54.279 23:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.279 23:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:54.279 23:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.279 23:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:54.279 23:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.279 23:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.279 23:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.279 23:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.538 23:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NDFjNDNjYjYxZThjYjBmZTQ0MGY4OGFkNGJjZmU3MTBlZTQzNzkwOWQxNjM4ODI2OTQ4NGFjNmU2YTgwYTg5MYax0PQ=: 00:19:55.480 23:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.480 23:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:55.480 23:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:55.480 23:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.480 23:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:55.480 23:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.480 23:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.480 23:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:55.480 23:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:55.480 23:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:55.480 23:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.480 23:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:55.480 23:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:55.480 23:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:55.480 23:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:55.480 23:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.480 23:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:55.480 23:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.480 23:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:55.480 23:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.480 23:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.052 00:19:56.052 23:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.052 23:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.052 23:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.052 23:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.052 23:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.052 23:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:56.052 23:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.052 23:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:56.052 23:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.052 { 00:19:56.052 "cntlid": 137, 00:19:56.052 "qid": 0, 00:19:56.052 "state": "enabled", 00:19:56.052 "thread": "nvmf_tgt_poll_group_000", 00:19:56.052 "listen_address": { 00:19:56.052 "trtype": "TCP", 00:19:56.052 "adrfam": "IPv4", 00:19:56.052 "traddr": "10.0.0.2", 00:19:56.052 "trsvcid": "4420" 00:19:56.052 }, 00:19:56.052 "peer_address": { 00:19:56.052 "trtype": "TCP", 00:19:56.052 "adrfam": "IPv4", 00:19:56.052 "traddr": "10.0.0.1", 00:19:56.052 "trsvcid": "50296" 00:19:56.052 }, 00:19:56.052 "auth": { 00:19:56.052 "state": "completed", 00:19:56.052 "digest": "sha512", 00:19:56.052 "dhgroup": "ffdhe8192" 00:19:56.052 } 00:19:56.052 } 00:19:56.052 ]' 00:19:56.052 23:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.052 23:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.052 23:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.313 23:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:56.313 23:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.313 23:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.313 23:56:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.313 23:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.313 23:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:Mjc0ZTY3M2MyMDZhNjI0MDJhMDI3NjIxMjU1OTEwMTY0OGFjOWVmOTBlZWI0YzY1Q0Sqzg==: --dhchap-ctrl-secret DHHC-1:03:ODFmMDQ4MDI3YjkzMjk3NzFjMGUxOWQzNjhmMjcyYzU2MWVhYWMwYzE4ODI0ZmFlZWE5ZjM1OGUwZDJlODk2ZVyK/e4=: 00:19:57.253 23:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.253 23:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:57.253 23:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:57.253 23:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.253 23:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:57.253 23:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.253 23:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:57.253 23:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:57.253 23:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:57.253 23:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.253 23:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:57.253 23:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:57.253 23:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:57.253 23:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.253 23:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.253 23:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:57.253 23:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.253 23:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:57.253 23:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.253 23:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.821 00:19:57.821 23:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.821 23:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.821 23:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.821 23:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.821 23:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.821 23:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:57.821 23:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.821 23:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:57.821 23:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.821 { 00:19:57.821 "cntlid": 139, 00:19:57.821 "qid": 0, 00:19:57.821 "state": "enabled", 00:19:57.821 "thread": "nvmf_tgt_poll_group_000", 00:19:57.821 "listen_address": { 00:19:57.821 "trtype": "TCP", 00:19:57.821 "adrfam": "IPv4", 00:19:57.821 "traddr": "10.0.0.2", 00:19:57.821 "trsvcid": "4420" 00:19:57.821 }, 00:19:57.821 "peer_address": { 00:19:57.821 "trtype": "TCP", 00:19:57.821 "adrfam": "IPv4", 00:19:57.821 "traddr": "10.0.0.1", 00:19:57.821 "trsvcid": "50320" 00:19:57.821 }, 00:19:57.821 "auth": { 00:19:57.821 "state": "completed", 00:19:57.821 "digest": "sha512", 00:19:57.821 "dhgroup": "ffdhe8192" 00:19:57.821 } 00:19:57.821 } 00:19:57.821 ]' 00:19:57.821 23:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.081 23:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:58.081 23:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.081 23:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:58.081 23:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.081 23:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.081 23:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.081 23:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.340 23:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MjVkNzI1MTNiNmM4NjJjYzE5ZDAxNDI3OWFlMDE4OWXgMoGK: --dhchap-ctrl-secret DHHC-1:02:ZTkyYjAyMTliZmRlYzQ1YTM4N2NiNDZjMDg2YTlkNmVmY2YwM2YwYjNiZmVjNWQ4pfzi3g==: 00:19:58.908 23:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.908 23:56:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:58.908 23:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:58.908 23:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.908 23:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:58.908 23:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.908 23:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:58.908 23:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:59.167 23:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:59.167 23:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.167 23:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:59.167 23:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:59.167 23:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:59.167 23:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.168 23:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.168 23:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:59.168 23:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.168 23:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:59.168 23:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.168 23:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.737 00:19:59.737 23:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.737 23:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.737 23:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.737 23:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.737 23:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.737 23:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:59.737 23:56:14 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:59.737 23:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:59.737 23:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.737 { 00:19:59.737 "cntlid": 141, 00:19:59.737 "qid": 0, 00:19:59.737 "state": "enabled", 00:19:59.737 "thread": "nvmf_tgt_poll_group_000", 00:19:59.737 "listen_address": { 00:19:59.737 "trtype": "TCP", 00:19:59.737 "adrfam": "IPv4", 00:19:59.737 "traddr": "10.0.0.2", 00:19:59.737 "trsvcid": "4420" 00:19:59.737 }, 00:19:59.737 "peer_address": { 00:19:59.737 "trtype": "TCP", 00:19:59.737 "adrfam": "IPv4", 00:19:59.737 "traddr": "10.0.0.1", 00:19:59.737 "trsvcid": "50182" 00:19:59.737 }, 00:19:59.737 "auth": { 00:19:59.737 "state": "completed", 00:19:59.737 "digest": "sha512", 00:19:59.737 "dhgroup": "ffdhe8192" 00:19:59.737 } 00:19:59.737 } 00:19:59.737 ]' 00:19:59.737 23:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.997 23:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:59.997 23:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.997 23:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:59.997 23:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.997 23:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.997 23:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.997 23:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.257 23:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MjI5NjRhZmNiZGE4NWM0NDM3M2FhMjZmM2ZkNjNkOGM4ZmQ1NjVmYjI1ZjY1YTFlxsgRCQ==: --dhchap-ctrl-secret DHHC-1:01:NTk1NmU4YWU1YmYzZTlmZTI2YzQxOGJmMjMyMTQ2YWbsuYLn: 00:20:00.827 23:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.827 23:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:00.827 23:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:00.827 23:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.827 23:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:00.827 23:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.827 23:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:00.827 23:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:01.104 23:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:20:01.104 23:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.104 23:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:01.104 23:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:01.104 23:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:01.104 23:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.104 23:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:01.104 23:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:01.104 23:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.104 23:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:01.104 23:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.104 23:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.418 00:20:01.418 23:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.418 23:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.418 23:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.692 23:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.692 23:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.692 23:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:01.692 23:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.692 23:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:01.692 23:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.692 { 00:20:01.692 "cntlid": 143, 00:20:01.692 "qid": 0, 00:20:01.692 "state": "enabled", 00:20:01.692 "thread": "nvmf_tgt_poll_group_000", 00:20:01.692 "listen_address": { 00:20:01.692 "trtype": "TCP", 00:20:01.692 "adrfam": "IPv4", 00:20:01.692 "traddr": "10.0.0.2", 00:20:01.692 "trsvcid": "4420" 00:20:01.692 }, 00:20:01.692 "peer_address": { 00:20:01.692 "trtype": "TCP", 00:20:01.692 "adrfam": "IPv4", 00:20:01.692 "traddr": "10.0.0.1", 00:20:01.692 "trsvcid": "50218" 00:20:01.692 }, 00:20:01.692 "auth": { 00:20:01.692 "state": "completed", 00:20:01.692 "digest": "sha512", 00:20:01.692 "dhgroup": "ffdhe8192" 00:20:01.692 } 00:20:01.692 } 00:20:01.692 ]' 00:20:01.692 23:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.692 23:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:01.692 
23:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.692 23:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:01.692 23:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.953 23:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.953 23:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.953 23:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.953 23:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NDFjNDNjYjYxZThjYjBmZTQ0MGY4OGFkNGJjZmU3MTBlZTQzNzkwOWQxNjM4ODI2OTQ4NGFjNmU2YTgwYTg5MYax0PQ=: 00:20:02.893 23:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.893 23:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:02.893 23:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:02.893 23:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.893 23:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:02.893 23:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:02.893 23:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:02.893 23:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:02.893 23:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:02.893 23:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:02.893 23:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:02.893 23:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:02.893 23:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.893 23:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:02.893 23:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:02.893 23:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:02.893 23:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.893 23:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:02.893 23:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:02.893 23:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.893 23:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:02.893 23:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.893 23:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.464 00:20:03.464 23:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.464 23:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.464 23:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.464 23:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.464 23:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.464 23:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:03.464 23:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.724 23:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:03.724 23:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.724 { 00:20:03.724 "cntlid": 145, 00:20:03.724 "qid": 0, 00:20:03.724 "state": "enabled", 00:20:03.724 "thread": "nvmf_tgt_poll_group_000", 00:20:03.724 "listen_address": { 00:20:03.724 "trtype": "TCP", 00:20:03.724 "adrfam": "IPv4", 00:20:03.724 "traddr": "10.0.0.2", 00:20:03.724 "trsvcid": "4420" 00:20:03.724 }, 00:20:03.724 "peer_address": { 00:20:03.724 "trtype": "TCP", 00:20:03.724 "adrfam": "IPv4", 00:20:03.724 "traddr": "10.0.0.1", 00:20:03.724 "trsvcid": "50250" 00:20:03.724 }, 00:20:03.724 "auth": { 00:20:03.724 "state": "completed", 00:20:03.724 "digest": "sha512", 00:20:03.724 "dhgroup": "ffdhe8192" 00:20:03.724 } 00:20:03.724 } 00:20:03.724 ]' 00:20:03.724 23:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.724 23:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:03.724 23:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.724 23:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:03.724 23:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.724 23:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.724 23:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.724 23:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.984 23:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:Mjc0ZTY3M2MyMDZhNjI0MDJhMDI3NjIxMjU1OTEwMTY0OGFjOWVmOTBlZWI0YzY1Q0Sqzg==: --dhchap-ctrl-secret DHHC-1:03:ODFmMDQ4MDI3YjkzMjk3NzFjMGUxOWQzNjhmMjcyYzU2MWVhYWMwYzE4ODI0ZmFlZWE5ZjM1OGUwZDJlODk2ZVyK/e4=: 00:20:04.554 23:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.554 23:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:04.554 23:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:04.554 23:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.554 23:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:04.554 23:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:20:04.554 23:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:04.554 23:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.554 23:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:04.554 23:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:04.554 23:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:20:04.554 23:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:04.554 23:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:20:04.554 23:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:20:04.554 23:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:20:04.554 23:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:20:04.555 23:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:04.555 23:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:20:05.126 request: 00:20:05.126 { 00:20:05.126 "name": "nvme0", 00:20:05.126 "trtype": "tcp", 00:20:05.126 "traddr": "10.0.0.2", 00:20:05.126 "adrfam": "ipv4", 00:20:05.126 "trsvcid": "4420", 00:20:05.126 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:05.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:05.126 "prchk_reftag": false, 00:20:05.126 "prchk_guard": false, 00:20:05.126 "hdgst": false, 00:20:05.126 "ddgst": false, 00:20:05.126 "dhchap_key": "key2", 00:20:05.126 "method": "bdev_nvme_attach_controller", 00:20:05.126 "req_id": 1 00:20:05.126 } 00:20:05.126 Got JSON-RPC error response 00:20:05.126 response: 00:20:05.126 { 00:20:05.126 "code": -5, 00:20:05.126 "message": "Input/output error" 00:20:05.126 } 00:20:05.126 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:20:05.126 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:20:05.126 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:20:05.126 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:20:05.126 23:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:05.126 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:05.126 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.126 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:05.126 23:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.126 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:05.126 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.126 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:05.126 23:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:05.126 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:20:05.126 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:05.126 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:20:05.126 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:20:05.126 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:20:05.126 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:20:05.126 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:05.126 23:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:05.699 request: 00:20:05.699 { 00:20:05.699 "name": "nvme0", 00:20:05.699 "trtype": "tcp", 00:20:05.699 "traddr": "10.0.0.2", 00:20:05.699 "adrfam": "ipv4", 00:20:05.699 "trsvcid": "4420", 00:20:05.699 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:05.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:05.699 "prchk_reftag": false, 00:20:05.699 "prchk_guard": false, 00:20:05.699 "hdgst": false, 00:20:05.699 "ddgst": false, 00:20:05.699 "dhchap_key": "key1", 00:20:05.699 "dhchap_ctrlr_key": "ckey2", 00:20:05.699 "method": "bdev_nvme_attach_controller", 00:20:05.699 "req_id": 1 00:20:05.699 } 00:20:05.699 Got JSON-RPC error response 00:20:05.699 response: 00:20:05.699 { 00:20:05.699 "code": -5, 00:20:05.699 "message": "Input/output error" 00:20:05.699 } 00:20:05.699 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:20:05.699 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:20:05.699 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:20:05.699 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:20:05.699 23:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:05.699 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:05.699 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.699 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:05.699 23:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:20:05.699 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:05.699 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.699 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:05.699 23:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.699 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:20:05.699 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.699 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@630 -- # local 
arg=hostrpc 00:20:05.699 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:20:05.699 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:20:05.699 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:20:05.699 23:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.699 23:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.960 request: 00:20:05.960 { 00:20:05.960 "name": "nvme0", 00:20:05.960 "trtype": "tcp", 00:20:05.960 "traddr": "10.0.0.2", 00:20:05.960 "adrfam": "ipv4", 00:20:05.960 "trsvcid": "4420", 00:20:05.960 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:05.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:05.960 "prchk_reftag": false, 00:20:05.960 "prchk_guard": false, 00:20:05.960 "hdgst": false, 00:20:05.960 "ddgst": false, 00:20:05.960 "dhchap_key": "key1", 00:20:05.960 "dhchap_ctrlr_key": "ckey1", 00:20:05.960 "method": "bdev_nvme_attach_controller", 00:20:05.960 "req_id": 1 00:20:05.960 } 00:20:05.960 Got JSON-RPC error response 00:20:05.960 response: 00:20:05.960 { 00:20:05.960 "code": -5, 00:20:05.960 "message": "Input/output error" 00:20:05.960 } 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 457319 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@942 -- # '[' -z 457319 ']' 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # kill -0 457319 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # uname 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 457319 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' reactor_0 
= sudo ']' 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@960 -- # echo 'killing process with pid 457319' 00:20:06.222 killing process with pid 457319 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@961 -- # kill 457319 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # wait 457319 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=482708 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 482708 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 482708 ']' 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:06.222 23:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.167 23:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:07.167 23:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:20:07.167 23:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:07.167 23:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:07.167 23:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.167 23:56:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.167 23:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:07.167 23:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 482708 00:20:07.167 23:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 482708 ']' 00:20:07.167 23:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.167 23:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:07.167 23:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:07.167 23:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:07.167 23:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.428 23:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:07.428 23:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:20:07.428 23:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:07.428 23:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:07.428 23:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.428 23:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:07.428 23:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:07.428 23:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.428 23:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:07.428 23:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:07.428 23:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:07.428 23:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.428 23:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:07.428 23:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:07.428 23:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.428 23:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:07.428 23:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:07.428 23:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:08.000 00:20:08.000 23:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.000 23:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.000 23:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.260 23:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.260 23:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.260 23:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:08.260 23:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.260 23:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:08.260 23:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.260 { 00:20:08.260 
"cntlid": 1, 00:20:08.260 "qid": 0, 00:20:08.260 "state": "enabled", 00:20:08.260 "thread": "nvmf_tgt_poll_group_000", 00:20:08.260 "listen_address": { 00:20:08.260 "trtype": "TCP", 00:20:08.260 "adrfam": "IPv4", 00:20:08.260 "traddr": "10.0.0.2", 00:20:08.260 "trsvcid": "4420" 00:20:08.260 }, 00:20:08.260 "peer_address": { 00:20:08.260 "trtype": "TCP", 00:20:08.260 "adrfam": "IPv4", 00:20:08.260 "traddr": "10.0.0.1", 00:20:08.260 "trsvcid": "50314" 00:20:08.260 }, 00:20:08.260 "auth": { 00:20:08.260 "state": "completed", 00:20:08.260 "digest": "sha512", 00:20:08.260 "dhgroup": "ffdhe8192" 00:20:08.260 } 00:20:08.260 } 00:20:08.260 ]' 00:20:08.260 23:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.260 23:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:08.260 23:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.260 23:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:08.260 23:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.260 23:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.260 23:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.260 23:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.520 23:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NDFjNDNjYjYxZThjYjBmZTQ0MGY4OGFkNGJjZmU3MTBlZTQzNzkwOWQxNjM4ODI2OTQ4NGFjNmU2YTgwYTg5MYax0PQ=: 00:20:09.092 23:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.092 23:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:09.092 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:09.092 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.092 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:09.092 23:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:09.092 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:09.092 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.092 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:09.092 23:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:09.092 23:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:09.352 23:56:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.352 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:20:09.352 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.352 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:20:09.352 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:20:09.352 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:20:09.352 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:20:09.352 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.352 23:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.612 request: 00:20:09.612 { 00:20:09.612 "name": "nvme0", 00:20:09.612 "trtype": "tcp", 00:20:09.612 "traddr": "10.0.0.2", 00:20:09.612 "adrfam": "ipv4", 00:20:09.612 "trsvcid": "4420", 00:20:09.612 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:09.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:09.613 "prchk_reftag": false, 00:20:09.613 "prchk_guard": false, 00:20:09.613 "hdgst": false, 00:20:09.613 "ddgst": false, 00:20:09.613 "dhchap_key": "key3", 00:20:09.613 "method": "bdev_nvme_attach_controller", 00:20:09.613 "req_id": 1 00:20:09.613 } 00:20:09.613 Got JSON-RPC error response 00:20:09.613 response: 00:20:09.613 { 00:20:09.613 "code": -5, 00:20:09.613 "message": "Input/output error" 00:20:09.613 } 00:20:09.613 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:20:09.613 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:20:09.613 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:20:09.613 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:20:09.613 23:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:09.613 23:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:09.613 23:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:09.613 23:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:09.613 23:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.613 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:20:09.613 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.613 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:20:09.613 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:20:09.613 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:20:09.613 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:20:09.613 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.613 23:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.874 request: 00:20:09.874 { 00:20:09.874 "name": "nvme0", 00:20:09.874 "trtype": "tcp", 00:20:09.874 "traddr": "10.0.0.2", 00:20:09.874 "adrfam": "ipv4", 00:20:09.874 "trsvcid": "4420", 00:20:09.874 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:09.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:09.874 "prchk_reftag": false, 00:20:09.874 "prchk_guard": false, 00:20:09.874 "hdgst": false, 00:20:09.874 "ddgst": false, 00:20:09.874 "dhchap_key": "key3", 00:20:09.874 "method": "bdev_nvme_attach_controller", 00:20:09.874 "req_id": 1 00:20:09.874 } 00:20:09.874 Got JSON-RPC error response 00:20:09.874 response: 00:20:09.874 { 00:20:09.874 "code": -5, 00:20:09.874 "message": "Input/output error" 00:20:09.874 } 00:20:09.874 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:20:09.874 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:20:09.874 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:20:09.874 23:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:20:09.874 23:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:09.874 23:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:09.874 23:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:09.874 23:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:09.874 23:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:09.874 23:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:09.874 23:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:09.874 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:09.874 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.874 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:09.874 23:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:09.874 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:09.874 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.874 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:09.874 23:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:09.874 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:20:09.874 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:09.874 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:20:09.874 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:20:09.874 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:20:09.874 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:20:09.874 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:09.874 23:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:10.135 request: 00:20:10.135 { 00:20:10.135 "name": "nvme0", 00:20:10.135 "trtype": "tcp", 00:20:10.135 "traddr": "10.0.0.2", 00:20:10.135 "adrfam": "ipv4", 00:20:10.135 "trsvcid": "4420", 00:20:10.135 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:10.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:10.135 "prchk_reftag": false, 00:20:10.135 "prchk_guard": false, 00:20:10.135 "hdgst": false, 00:20:10.135 "ddgst": false, 00:20:10.135 
"dhchap_key": "key0", 00:20:10.135 "dhchap_ctrlr_key": "key1", 00:20:10.135 "method": "bdev_nvme_attach_controller", 00:20:10.135 "req_id": 1 00:20:10.135 } 00:20:10.135 Got JSON-RPC error response 00:20:10.135 response: 00:20:10.135 { 00:20:10.135 "code": -5, 00:20:10.135 "message": "Input/output error" 00:20:10.135 } 00:20:10.135 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:20:10.135 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:20:10.135 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:20:10.135 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:20:10.135 23:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:10.135 23:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:10.396 00:20:10.396 23:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:10.396 23:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:20:10.396 23:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.657 23:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.657 23:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.657 23:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.657 23:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:10.657 23:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:10.657 23:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 457664 00:20:10.657 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@942 -- # '[' -z 457664 ']' 00:20:10.657 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # kill -0 457664 00:20:10.657 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # uname 00:20:10.657 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:20:10.657 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 457664 00:20:10.657 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:20:10.657 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:20:10.657 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@960 -- # echo 'killing process with pid 457664' 00:20:10.657 killing process with pid 457664 00:20:10.657 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@961 -- # kill 457664 00:20:10.657 23:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # wait 457664 00:20:10.918 
23:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:10.918 23:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:10.918 23:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:10.918 23:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:10.918 23:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:10.918 23:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:10.918 23:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:10.918 rmmod nvme_tcp 00:20:10.918 rmmod nvme_fabrics 00:20:10.918 rmmod nvme_keyring 00:20:10.918 23:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:10.918 23:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:10.918 23:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:10.918 23:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 482708 ']' 00:20:10.918 23:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 482708 00:20:10.918 23:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@942 -- # '[' -z 482708 ']' 00:20:10.918 23:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # kill -0 482708 00:20:10.918 23:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # uname 00:20:10.918 23:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:20:10.918 23:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 482708 00:20:11.179 23:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:20:11.179 23:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:20:11.179 23:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@960 -- # echo 'killing process with pid 482708' 00:20:11.180 killing process with pid 482708 00:20:11.180 23:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@961 -- # kill 482708 00:20:11.180 23:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # wait 482708 00:20:11.180 23:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:11.180 23:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:11.180 23:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:11.180 23:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:11.180 23:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:11.180 23:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.180 23:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:11.180 23:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.729 23:56:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:13.729 23:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.5kv /tmp/spdk.key-sha256.fGp /tmp/spdk.key-sha384.Sz0 /tmp/spdk.key-sha512.cmI /tmp/spdk.key-sha512.QLq /tmp/spdk.key-sha384.vWc /tmp/spdk.key-sha256.9ze '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:13.729 00:20:13.729 real 2m19.681s 00:20:13.729 user 5m9.119s 00:20:13.729 sys 0m19.794s 00:20:13.729 23:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1118 -- # xtrace_disable 00:20:13.729 23:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.729 ************************************ 00:20:13.729 END TEST nvmf_auth_target 00:20:13.729 ************************************ 00:20:13.729 23:56:28 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:20:13.729 23:56:28 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:20:13.729 23:56:28 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:13.729 23:56:28 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:20:13.729 23:56:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:20:13.729 23:56:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:13.729 ************************************ 00:20:13.729 START TEST nvmf_bdevio_no_huge 00:20:13.729 ************************************ 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:13.729 * Looking for test storage... 00:20:13.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
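Everything the bdevio run does from here on reuses the identity established while sourcing test/nvmf/common.sh above. Collapsed into a few lines (values copied from the log; the surrounding helper functions are omitted):

    NVMF_PORT=4420                                    # primary listener port; 4421/4422 are the second and third ports
    NVME_HOSTNQN=$(nvme gen-hostnqn)                  # here: nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396  # the UUID portion of the host NQN, reused as the host ID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NET_TYPE=phy                                      # physical NICs: real ports are later moved into a network namespace
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn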
00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.729 23:56:28 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:13.729 23:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
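The gather_supported_nvmf_pci_devs output that starts above and continues below is how the test picks its physical ports: NICs are classified purely by PCI vendor:device ID, and for a TCP run with E810 hardware present only the E810 entries survive. A condensed sketch of that classification, assembled from the nvmf/common.sh fragments in the log (the board names in the comments are informational):

    e810+=(${pci_bus_cache["$intel:0x1592"]})      # Intel E810-C
    e810+=(${pci_bus_cache["$intel:0x159b"]})      # Intel E810-XXV: the two 0000:31:00.x ports found in this run
    x722+=(${pci_bus_cache["$intel:0x37d2"]})      # Intel X722
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})    # one of several Mellanox ConnectX IDs collected
    pci_devs=("${e810[@]}")                        # E810 boards are present, so only their net devices (cvl_0_0, cvl_0_1) are used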
00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:21.873 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:21.874 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:21.874 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:21.874 
23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:21.874 Found net devices under 0000:31:00.0: cvl_0_0 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:21.874 Found net devices under 0000:31:00.1: cvl_0_1 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.874 23:56:36 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:21.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:20:21.874 00:20:21.874 --- 10.0.0.2 ping statistics --- 00:20:21.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.874 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:21.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:21.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:20:21.874 00:20:21.874 --- 10.0.0.1 ping statistics --- 00:20:21.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.874 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=488422 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 
488422 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@823 -- # '[' -z 488422 ']' 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:21.874 23:56:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:21.874 [2024-07-15 23:56:36.810967] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:20:21.874 [2024-07-15 23:56:36.811027] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:21.874 [2024-07-15 23:56:36.909862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:21.874 [2024-07-15 23:56:37.016320] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.874 [2024-07-15 23:56:37.016374] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.874 [2024-07-15 23:56:37.016383] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:21.874 [2024-07-15 23:56:37.016390] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:21.874 [2024-07-15 23:56:37.016396] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
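nvmfappstart above is where the no-huge variant actually differs from the other targets in this log: the target binary runs inside the cvl_0_0_ns_spdk namespace created a few lines earlier, with hugepages disabled and a fixed ordinary-memory budget. The launch recorded in the log (path shortened) is:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
    # --no-huge makes DPDK fall back to anonymous (non-hugepage) memory and -s 1024 caps it at 1024 MiB;
    # -m 0x78 pins the reactors to cores 3-6 (the four "Reactor started on core" notices below);
    # -i 0 selects shared-memory id 0 and -e 0xFFFF enables the tracepoint group mask noted above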
00:20:21.874 [2024-07-15 23:56:37.016556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:21.874 [2024-07-15 23:56:37.016702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:21.874 [2024-07-15 23:56:37.016864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:21.874 [2024-07-15 23:56:37.016865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:22.447 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:22.447 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # return 0 00:20:22.447 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:22.447 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:22.447 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:22.709 [2024-07-15 23:56:37.653894] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:22.709 Malloc0 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:22.709 [2024-07-15 23:56:37.695287] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:22.709 { 00:20:22.709 "params": { 00:20:22.709 "name": "Nvme$subsystem", 00:20:22.709 "trtype": "$TEST_TRANSPORT", 00:20:22.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.709 "adrfam": "ipv4", 00:20:22.709 "trsvcid": "$NVMF_PORT", 00:20:22.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.709 "hdgst": ${hdgst:-false}, 00:20:22.709 "ddgst": ${ddgst:-false} 00:20:22.709 }, 00:20:22.709 "method": "bdev_nvme_attach_controller" 00:20:22.709 } 00:20:22.709 EOF 00:20:22.709 )") 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:22.709 23:56:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:22.709 "params": { 00:20:22.710 "name": "Nvme1", 00:20:22.710 "trtype": "tcp", 00:20:22.710 "traddr": "10.0.0.2", 00:20:22.710 "adrfam": "ipv4", 00:20:22.710 "trsvcid": "4420", 00:20:22.710 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.710 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:22.710 "hdgst": false, 00:20:22.710 "ddgst": false 00:20:22.710 }, 00:20:22.710 "method": "bdev_nvme_attach_controller" 00:20:22.710 }' 00:20:22.710 [2024-07-15 23:56:37.751539] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
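The JSON printed above is the entire configuration the bdevio binary receives: gen_nvmf_target_json emits a single bdev_nvme_attach_controller entry (Nvme1, 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1, header and data digests off), and bdevio loads it through --json /dev/fd/62 instead of talking to an RPC socket. In outline the invocation amounts to the sketch below; the /dev/fd/62 seen in the log is the descriptor bash assigns to this kind of process substitution:

    ./test/bdev/bdevio/bdevio --no-huge -s 1024 --json <(gen_nvmf_target_json)
    # bdevio parses the config, attaches to the target started earlier as an NVMe/TCP initiator,
    # and then runs the CUnit suite whose results follow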
00:20:22.710 [2024-07-15 23:56:37.751613] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid488469 ] 00:20:22.710 [2024-07-15 23:56:37.829496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:22.971 [2024-07-15 23:56:37.925535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.971 [2024-07-15 23:56:37.925681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.971 [2024-07-15 23:56:37.925684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.971 I/O targets: 00:20:22.971 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:22.971 00:20:22.971 00:20:22.971 CUnit - A unit testing framework for C - Version 2.1-3 00:20:22.971 http://cunit.sourceforge.net/ 00:20:22.971 00:20:22.971 00:20:22.971 Suite: bdevio tests on: Nvme1n1 00:20:22.971 Test: blockdev write read block ...passed 00:20:22.971 Test: blockdev write zeroes read block ...passed 00:20:23.232 Test: blockdev write zeroes read no split ...passed 00:20:23.232 Test: blockdev write zeroes read split ...passed 00:20:23.232 Test: blockdev write zeroes read split partial ...passed 00:20:23.232 Test: blockdev reset ...[2024-07-15 23:56:38.244575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.232 [2024-07-15 23:56:38.244635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea8970 (9): Bad file descriptor 00:20:23.232 [2024-07-15 23:56:38.386746] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:23.232 passed 00:20:23.232 Test: blockdev write read 8 blocks ...passed 00:20:23.232 Test: blockdev write read size > 128k ...passed 00:20:23.232 Test: blockdev write read invalid size ...passed 00:20:23.493 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:23.493 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:23.493 Test: blockdev write read max offset ...passed 00:20:23.493 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:23.493 Test: blockdev writev readv 8 blocks ...passed 00:20:23.493 Test: blockdev writev readv 30 x 1block ...passed 00:20:23.493 Test: blockdev writev readv block ...passed 00:20:23.493 Test: blockdev writev readv size > 128k ...passed 00:20:23.493 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:23.493 Test: blockdev comparev and writev ...[2024-07-15 23:56:38.566630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:23.493 [2024-07-15 23:56:38.566653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.493 [2024-07-15 23:56:38.566663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:23.493 [2024-07-15 23:56:38.566669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:23.493 [2024-07-15 23:56:38.566945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:23.493 [2024-07-15 23:56:38.566953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:23.493 [2024-07-15 23:56:38.566963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:23.493 [2024-07-15 23:56:38.566968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:23.493 [2024-07-15 23:56:38.567239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:23.493 [2024-07-15 23:56:38.567247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:23.493 [2024-07-15 23:56:38.567256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:23.493 [2024-07-15 23:56:38.567261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:23.493 [2024-07-15 23:56:38.567643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:23.493 [2024-07-15 23:56:38.567650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:23.493 [2024-07-15 23:56:38.567660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:23.493 [2024-07-15 23:56:38.567665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:23.493 passed 00:20:23.493 Test: blockdev nvme passthru rw ...passed 00:20:23.493 Test: blockdev nvme passthru vendor specific ...[2024-07-15 23:56:38.651891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.494 [2024-07-15 23:56:38.651905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:23.494 [2024-07-15 23:56:38.652194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.494 [2024-07-15 23:56:38.652202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:23.494 [2024-07-15 23:56:38.652484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.494 [2024-07-15 23:56:38.652491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:23.494 [2024-07-15 23:56:38.652774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.494 [2024-07-15 23:56:38.652790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:23.494 passed 00:20:23.494 Test: blockdev nvme admin passthru ...passed 00:20:23.755 Test: blockdev copy ...passed 00:20:23.755 00:20:23.755 Run Summary: Type Total Ran Passed Failed Inactive 00:20:23.755 suites 1 1 n/a 0 0 00:20:23.755 tests 23 23 23 0 0 00:20:23.755 asserts 152 152 152 0 n/a 00:20:23.755 00:20:23.755 Elapsed time = 1.289 seconds 00:20:24.016 23:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:24.016 23:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:24.016 23:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:24.016 23:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:24.016 23:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:24.016 23:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:24.016 23:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:24.016 23:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:24.016 23:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:24.016 23:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:24.016 23:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:24.016 23:56:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:24.016 rmmod nvme_tcp 00:20:24.016 rmmod nvme_fabrics 00:20:24.016 rmmod nvme_keyring 00:20:24.016 23:56:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:24.016 23:56:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:24.016 23:56:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:24.016 23:56:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 488422 ']' 00:20:24.016 23:56:39 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 488422 00:20:24.016 23:56:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@942 -- # '[' -z 488422 ']' 00:20:24.016 23:56:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # kill -0 488422 00:20:24.016 23:56:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@947 -- # uname 00:20:24.016 23:56:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:20:24.016 23:56:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 488422 00:20:24.016 23:56:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # process_name=reactor_3 00:20:24.016 23:56:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' reactor_3 = sudo ']' 00:20:24.016 23:56:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # echo 'killing process with pid 488422' 00:20:24.016 killing process with pid 488422 00:20:24.016 23:56:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@961 -- # kill 488422 00:20:24.016 23:56:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # wait 488422 00:20:24.278 23:56:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:24.278 23:56:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:24.278 23:56:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:24.278 23:56:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:24.278 23:56:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:24.278 23:56:39 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.278 23:56:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.278 23:56:39 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.822 23:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:26.822 00:20:26.822 real 0m13.084s 00:20:26.822 user 0m13.788s 00:20:26.822 sys 0m6.974s 00:20:26.822 23:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1118 -- # xtrace_disable 00:20:26.822 23:56:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:26.822 ************************************ 00:20:26.822 END TEST nvmf_bdevio_no_huge 00:20:26.822 ************************************ 00:20:26.822 23:56:41 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:20:26.822 23:56:41 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:26.822 23:56:41 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:20:26.822 23:56:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:20:26.822 23:56:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:26.822 ************************************ 00:20:26.822 START TEST nvmf_tls 00:20:26.822 ************************************ 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:26.822 * Looking for test storage... 
00:20:26.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:26.822 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.823 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:26.823 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:26.823 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:26.823 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.823 23:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.823 23:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.823 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:26.823 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:26.823 23:56:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:26.823 23:56:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:34.964 
23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:34.964 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:34.964 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:34.964 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:34.965 Found net devices under 0000:31:00.0: cvl_0_0 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:34.965 Found net devices under 0000:31:00.1: cvl_0_1 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:34.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:20:34.965 00:20:34.965 --- 10.0.0.2 ping statistics --- 00:20:34.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.965 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:34.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:34.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:20:34.965 00:20:34.965 --- 10.0.0.1 ping statistics --- 00:20:34.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.965 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=493480 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 493480 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 493480 ']' 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:34.965 23:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.965 [2024-07-15 23:56:50.042004] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:20:34.965 [2024-07-15 23:56:50.042111] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.965 [2024-07-15 23:56:50.143774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.225 [2024-07-15 23:56:50.235897] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.225 [2024-07-15 23:56:50.235962] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
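Note: the ping exchange above confirms the loopback topology the harness builds out of the two E810 ports: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed 10.0.0.2 for the target, while cvl_0_1 stays in the root namespace as 10.0.0.1 for the initiator, with TCP port 4420 opened in iptables. A minimal sketch of that setup, using the interface names from this log (this condenses the nvmf_tcp_init steps traced above, not a verbatim copy of the script):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # root namespace -> target address
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> initiator address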
00:20:35.225 [2024-07-15 23:56:50.235970] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.225 [2024-07-15 23:56:50.235978] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.225 [2024-07-15 23:56:50.235984] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:35.225 [2024-07-15 23:56:50.236009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.794 23:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:35.794 23:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:20:35.794 23:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:35.794 23:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:35.794 23:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.794 23:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.794 23:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:35.794 23:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:36.053 true 00:20:36.053 23:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:36.053 23:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:36.053 23:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:36.053 23:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:36.053 23:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:36.314 23:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:36.314 23:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:36.573 23:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:36.573 23:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:36.573 23:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:36.573 23:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:36.573 23:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:36.832 23:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:36.832 23:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:36.833 23:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:36.833 23:56:51 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:37.093 23:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:37.093 23:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:37.093 23:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:37.093 23:56:52 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:37.093 23:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:37.354 23:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:37.354 23:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:37.354 23:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:37.614 23:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:37.614 23:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:37.614 23:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:37.614 23:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:37.614 23:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:37.614 23:56:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:37.614 23:56:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:37.614 23:56:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:37.614 23:56:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:37.614 23:56:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:37.614 23:56:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:37.614 23:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:37.614 23:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:37.615 23:56:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:37.615 23:56:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:37.615 23:56:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:37.615 23:56:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:37.615 23:56:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:37.615 23:56:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:37.875 23:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:37.875 23:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:37.875 23:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.omXxutgYnr 00:20:37.875 23:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:37.875 23:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.iEavPzbqjC 00:20:37.875 23:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:37.875 23:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:37.875 23:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.omXxutgYnr 00:20:37.875 23:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.iEavPzbqjC 00:20:37.875 23:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:20:37.875 23:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:38.136 23:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.omXxutgYnr 00:20:38.136 23:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.omXxutgYnr 00:20:38.137 23:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:38.396 [2024-07-15 23:56:53.421366] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.396 23:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:38.656 23:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:38.656 [2024-07-15 23:56:53.730106] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:38.656 [2024-07-15 23:56:53.730300] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.656 23:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:38.916 malloc0 00:20:38.916 23:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:38.916 23:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.omXxutgYnr 00:20:39.202 [2024-07-15 23:56:54.181210] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:39.202 23:56:54 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.omXxutgYnr 00:20:49.277 Initializing NVMe Controllers 00:20:49.277 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:49.277 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:49.277 Initialization complete. Launching workers. 
00:20:49.277 ======================================================== 00:20:49.277 Latency(us) 00:20:49.277 Device Information : IOPS MiB/s Average min max 00:20:49.277 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19002.90 74.23 3367.94 1209.03 4020.78 00:20:49.277 ======================================================== 00:20:49.277 Total : 19002.90 74.23 3367.94 1209.03 4020.78 00:20:49.277 00:20:49.277 23:57:04 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.omXxutgYnr 00:20:49.277 23:57:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:49.277 23:57:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:49.277 23:57:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:49.277 23:57:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.omXxutgYnr' 00:20:49.277 23:57:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:49.277 23:57:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=496346 00:20:49.277 23:57:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:49.277 23:57:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 496346 /var/tmp/bdevperf.sock 00:20:49.277 23:57:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:49.277 23:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 496346 ']' 00:20:49.277 23:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:49.277 23:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:49.277 23:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:49.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:49.277 23:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:49.277 23:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.277 [2024-07-15 23:57:04.335966] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
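Note: the spdk_nvme_perf pass above connected with the correct key; the run_bdevperf helper that follows repeats the check through the bdevperf application. A condensed sketch of that flow, with the RPC socket, key file, and NQNs taken from this log ($SPDK_ROOT stands in for the jenkins workspace checkout of spdk):

  # start bdevperf on its own RPC socket; -z defers the workload until perform_tests is issued below
  $SPDK_ROOT/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # attach a TLS-protected controller using the matching PSK file
  $SPDK_ROOT/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.omXxutgYnr
  # kick off the timed run over the same RPC socket
  $SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests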
00:20:49.277 [2024-07-15 23:57:04.336023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid496346 ] 00:20:49.277 [2024-07-15 23:57:04.391203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.277 [2024-07-15 23:57:04.443838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.220 23:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:50.220 23:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:20:50.220 23:57:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.omXxutgYnr 00:20:50.220 [2024-07-15 23:57:05.236978] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:50.220 [2024-07-15 23:57:05.237031] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:50.220 TLSTESTn1 00:20:50.220 23:57:05 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:50.482 Running I/O for 10 seconds... 00:21:00.479 00:21:00.479 Latency(us) 00:21:00.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.479 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:00.479 Verification LBA range: start 0x0 length 0x2000 00:21:00.479 TLSTESTn1 : 10.01 5584.79 21.82 0.00 0.00 22886.64 4532.91 72963.41 00:21:00.479 =================================================================================================================== 00:21:00.479 Total : 5584.79 21.82 0.00 0.00 22886.64 4532.91 72963.41 00:21:00.479 0 00:21:00.479 23:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:00.479 23:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 496346 00:21:00.479 23:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 496346 ']' 00:21:00.479 23:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 496346 00:21:00.479 23:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:21:00.479 23:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:00.479 23:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 496346 00:21:00.479 23:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:21:00.479 23:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:21:00.479 23:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 496346' 00:21:00.479 killing process with pid 496346 00:21:00.479 23:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 496346 00:21:00.479 Received shutdown signal, test time was about 10.000000 seconds 00:21:00.479 00:21:00.479 Latency(us) 00:21:00.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.480 
=================================================================================================================== 00:21:00.480 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:00.480 [2024-07-15 23:57:15.529834] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:00.480 23:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 496346 00:21:00.480 23:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iEavPzbqjC 00:21:00.480 23:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@642 -- # local es=0 00:21:00.480 23:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@644 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iEavPzbqjC 00:21:00.480 23:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@630 -- # local arg=run_bdevperf 00:21:00.480 23:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:00.480 23:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # type -t run_bdevperf 00:21:00.480 23:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:00.480 23:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iEavPzbqjC 00:21:00.480 23:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:00.480 23:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:00.480 23:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:00.480 23:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.iEavPzbqjC' 00:21:00.480 23:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:00.480 23:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=499108 00:21:00.480 23:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:00.480 23:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 499108 /var/tmp/bdevperf.sock 00:21:00.480 23:57:15 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:00.480 23:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 499108 ']' 00:21:00.480 23:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.480 23:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:00.480 23:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:00.480 23:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:00.480 23:57:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.739 [2024-07-15 23:57:15.702197] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:21:00.739 [2024-07-15 23:57:15.702259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid499108 ] 00:21:00.739 [2024-07-15 23:57:15.756761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.739 [2024-07-15 23:57:15.807855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.308 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:01.308 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:21:01.308 23:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iEavPzbqjC 00:21:01.569 [2024-07-15 23:57:16.608688] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:01.569 [2024-07-15 23:57:16.608745] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:01.569 [2024-07-15 23:57:16.619684] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:01.569 [2024-07-15 23:57:16.619724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232ad80 (107): Transport endpoint is not connected 00:21:01.569 [2024-07-15 23:57:16.620700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232ad80 (9): Bad file descriptor 00:21:01.569 [2024-07-15 23:57:16.621702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.569 [2024-07-15 23:57:16.621713] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:01.569 [2024-07-15 23:57:16.621721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:01.569 request: 00:21:01.569 { 00:21:01.569 "name": "TLSTEST", 00:21:01.569 "trtype": "tcp", 00:21:01.569 "traddr": "10.0.0.2", 00:21:01.569 "adrfam": "ipv4", 00:21:01.569 "trsvcid": "4420", 00:21:01.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:01.569 "prchk_reftag": false, 00:21:01.569 "prchk_guard": false, 00:21:01.569 "hdgst": false, 00:21:01.569 "ddgst": false, 00:21:01.569 "psk": "/tmp/tmp.iEavPzbqjC", 00:21:01.569 "method": "bdev_nvme_attach_controller", 00:21:01.569 "req_id": 1 00:21:01.569 } 00:21:01.569 Got JSON-RPC error response 00:21:01.569 response: 00:21:01.569 { 00:21:01.569 "code": -5, 00:21:01.569 "message": "Input/output error" 00:21:01.569 } 00:21:01.569 23:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 499108 00:21:01.569 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 499108 ']' 00:21:01.569 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 499108 00:21:01.569 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:21:01.569 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:01.569 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 499108 00:21:01.569 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:21:01.569 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:21:01.569 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 499108' 00:21:01.569 killing process with pid 499108 00:21:01.569 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 499108 00:21:01.569 Received shutdown signal, test time was about 10.000000 seconds 00:21:01.569 00:21:01.569 Latency(us) 00:21:01.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.569 =================================================================================================================== 00:21:01.569 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:01.569 [2024-07-15 23:57:16.691787] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:01.569 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 499108 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # es=1 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.omXxutgYnr 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@642 -- # local es=0 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@644 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.omXxutgYnr 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@630 -- # local arg=run_bdevperf 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@634 -- # type -t run_bdevperf 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.omXxutgYnr 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.omXxutgYnr' 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=499305 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 499305 /var/tmp/bdevperf.sock 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 499305 ']' 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:01.830 23:57:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.830 [2024-07-15 23:57:16.847691] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:21:01.830 [2024-07-15 23:57:16.847747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid499305 ] 00:21:01.830 [2024-07-15 23:57:16.902696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.830 [2024-07-15 23:57:16.954708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.omXxutgYnr 00:21:02.772 [2024-07-15 23:57:17.751750] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:02.772 [2024-07-15 23:57:17.751812] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:02.772 [2024-07-15 23:57:17.762391] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:02.772 [2024-07-15 23:57:17.762410] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:02.772 [2024-07-15 23:57:17.762429] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:02.772 [2024-07-15 23:57:17.762865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa8d80 (107): Transport endpoint is not connected 00:21:02.772 [2024-07-15 23:57:17.763860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa8d80 (9): Bad file descriptor 00:21:02.772 [2024-07-15 23:57:17.764862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.772 [2024-07-15 23:57:17.764869] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:02.772 [2024-07-15 23:57:17.764876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:02.772 request: 00:21:02.772 { 00:21:02.772 "name": "TLSTEST", 00:21:02.772 "trtype": "tcp", 00:21:02.772 "traddr": "10.0.0.2", 00:21:02.772 "adrfam": "ipv4", 00:21:02.772 "trsvcid": "4420", 00:21:02.772 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.772 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:02.772 "prchk_reftag": false, 00:21:02.772 "prchk_guard": false, 00:21:02.772 "hdgst": false, 00:21:02.772 "ddgst": false, 00:21:02.772 "psk": "/tmp/tmp.omXxutgYnr", 00:21:02.772 "method": "bdev_nvme_attach_controller", 00:21:02.772 "req_id": 1 00:21:02.772 } 00:21:02.772 Got JSON-RPC error response 00:21:02.772 response: 00:21:02.772 { 00:21:02.772 "code": -5, 00:21:02.772 "message": "Input/output error" 00:21:02.772 } 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 499305 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 499305 ']' 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 499305 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 499305 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 499305' 00:21:02.772 killing process with pid 499305 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 499305 00:21:02.772 Received shutdown signal, test time was about 10.000000 seconds 00:21:02.772 00:21:02.772 Latency(us) 00:21:02.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.772 =================================================================================================================== 00:21:02.772 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:02.772 [2024-07-15 23:57:17.839323] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 499305 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # es=1 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.omXxutgYnr 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@642 -- # local es=0 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@644 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.omXxutgYnr 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@630 -- # local arg=run_bdevperf 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@634 -- # type -t run_bdevperf 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.omXxutgYnr 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.omXxutgYnr' 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=499468 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 499468 /var/tmp/bdevperf.sock 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 499468 ']' 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:02.772 23:57:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.032 [2024-07-15 23:57:18.001429] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:21:03.032 [2024-07-15 23:57:18.001487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid499468 ] 00:21:03.032 [2024-07-15 23:57:18.057954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.032 [2024-07-15 23:57:18.109654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.602 23:57:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:03.602 23:57:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:21:03.602 23:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.omXxutgYnr 00:21:03.862 [2024-07-15 23:57:18.898669] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:03.862 [2024-07-15 23:57:18.898731] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:03.862 [2024-07-15 23:57:18.905162] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:03.862 [2024-07-15 23:57:18.905188] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:03.862 [2024-07-15 23:57:18.905206] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:03.862 [2024-07-15 23:57:18.905849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ded80 (107): Transport endpoint is not connected 00:21:03.862 [2024-07-15 23:57:18.906844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ded80 (9): Bad file descriptor 00:21:03.862 [2024-07-15 23:57:18.907845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:03.863 [2024-07-15 23:57:18.907852] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:03.863 [2024-07-15 23:57:18.907859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:03.863 request: 00:21:03.863 { 00:21:03.863 "name": "TLSTEST", 00:21:03.863 "trtype": "tcp", 00:21:03.863 "traddr": "10.0.0.2", 00:21:03.863 "adrfam": "ipv4", 00:21:03.863 "trsvcid": "4420", 00:21:03.863 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:03.863 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:03.863 "prchk_reftag": false, 00:21:03.863 "prchk_guard": false, 00:21:03.863 "hdgst": false, 00:21:03.863 "ddgst": false, 00:21:03.863 "psk": "/tmp/tmp.omXxutgYnr", 00:21:03.863 "method": "bdev_nvme_attach_controller", 00:21:03.863 "req_id": 1 00:21:03.863 } 00:21:03.863 Got JSON-RPC error response 00:21:03.863 response: 00:21:03.863 { 00:21:03.863 "code": -5, 00:21:03.863 "message": "Input/output error" 00:21:03.863 } 00:21:03.863 23:57:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 499468 00:21:03.863 23:57:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 499468 ']' 00:21:03.863 23:57:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 499468 00:21:03.863 23:57:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:21:03.863 23:57:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:03.863 23:57:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 499468 00:21:03.863 23:57:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:21:03.863 23:57:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:21:03.863 23:57:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 499468' 00:21:03.863 killing process with pid 499468 00:21:03.863 23:57:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 499468 00:21:03.863 Received shutdown signal, test time was about 10.000000 seconds 00:21:03.863 00:21:03.863 Latency(us) 00:21:03.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.863 =================================================================================================================== 00:21:03.863 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:03.863 [2024-07-15 23:57:18.980674] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:03.863 23:57:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 499468 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # es=1 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@642 -- # local es=0 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@644 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@630 -- # local arg=run_bdevperf 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # type -t 
run_bdevperf 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=499805 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 499805 /var/tmp/bdevperf.sock 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 499805 ']' 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:04.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:04.123 23:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.123 [2024-07-15 23:57:19.138579] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:21:04.123 [2024-07-15 23:57:19.138636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid499805 ] 00:21:04.123 [2024-07-15 23:57:19.193116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.123 [2024-07-15 23:57:19.245256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.064 23:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:05.064 23:57:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:21:05.064 23:57:19 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:05.064 [2024-07-15 23:57:20.078125] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:05.064 [2024-07-15 23:57:20.079815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8c460 (9): Bad file descriptor 00:21:05.064 [2024-07-15 23:57:20.080815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:05.064 [2024-07-15 23:57:20.080827] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:05.064 [2024-07-15 23:57:20.080834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:05.064 request: 00:21:05.064 { 00:21:05.064 "name": "TLSTEST", 00:21:05.064 "trtype": "tcp", 00:21:05.064 "traddr": "10.0.0.2", 00:21:05.064 "adrfam": "ipv4", 00:21:05.064 "trsvcid": "4420", 00:21:05.064 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.064 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.064 "prchk_reftag": false, 00:21:05.064 "prchk_guard": false, 00:21:05.064 "hdgst": false, 00:21:05.064 "ddgst": false, 00:21:05.064 "method": "bdev_nvme_attach_controller", 00:21:05.064 "req_id": 1 00:21:05.064 } 00:21:05.064 Got JSON-RPC error response 00:21:05.064 response: 00:21:05.064 { 00:21:05.064 "code": -5, 00:21:05.064 "message": "Input/output error" 00:21:05.064 } 00:21:05.064 23:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 499805 00:21:05.064 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 499805 ']' 00:21:05.064 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 499805 00:21:05.064 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:21:05.064 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:05.064 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 499805 00:21:05.064 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:21:05.064 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:21:05.064 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 499805' 00:21:05.064 killing process with pid 499805 00:21:05.064 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 499805 00:21:05.064 Received shutdown signal, test time was about 10.000000 
seconds 00:21:05.064 00:21:05.064 Latency(us) 00:21:05.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.064 =================================================================================================================== 00:21:05.064 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:05.064 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 499805 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # es=1 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 493480 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 493480 ']' 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 493480 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 493480 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 493480' 00:21:05.325 killing process with pid 493480 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 493480 00:21:05.325 [2024-07-15 23:57:20.312187] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 493480 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.dYc23xM5FI 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.dYc23xM5FI 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart 
-m 0x2 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=500141 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 500141 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 500141 ']' 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:05.325 23:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.586 [2024-07-15 23:57:20.542074] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:21:05.586 [2024-07-15 23:57:20.542135] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.586 [2024-07-15 23:57:20.632220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.586 [2024-07-15 23:57:20.689235] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.586 [2024-07-15 23:57:20.689269] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.586 [2024-07-15 23:57:20.689274] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.586 [2024-07-15 23:57:20.689278] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.586 [2024-07-15 23:57:20.689282] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
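The NVMeTLSkey-1:02:...: string generated at target/tls.sh@159 above comes from format_interchange_psk, which wraps the raw hex key in the NVMe TLS PSK interchange format before the value is written (mode 0600) to /tmp/tmp.dYc23xM5FI. A minimal stand-alone sketch of that helper, reconstructed from the nvmf/common.sh trace, is shown here; the use of python3 and the little-endian CRC32 byte order are assumptions read off the trace, not taken from the script source.

format_interchange_psk() {   # sketch of nvmf/common.sh format_key/format_interchange_psk
  local key=$1 digest=$2
  python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
# Append the CRC32 of the key material (little-endian assumed), base64-encode,
# and wrap the result in the NVMe TLS PSK interchange framing.
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()), end="")
PY
}
# Example: format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
# yields the NVMeTLSkey-1:02:MDAx...wWXNJw==: value stored in /tmp/tmp.dYc23xM5FI above.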
00:21:05.586 [2024-07-15 23:57:20.689297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.157 23:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:06.157 23:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:21:06.157 23:57:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:06.157 23:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:06.157 23:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.157 23:57:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.157 23:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.dYc23xM5FI 00:21:06.157 23:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.dYc23xM5FI 00:21:06.157 23:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:06.417 [2024-07-15 23:57:21.483625] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.417 23:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:06.678 23:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:06.678 [2024-07-15 23:57:21.780346] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:06.678 [2024-07-15 23:57:21.780532] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.678 23:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:06.939 malloc0 00:21:06.939 23:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:06.939 23:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dYc23xM5FI 00:21:07.201 [2024-07-15 23:57:22.211408] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:07.201 23:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dYc23xM5FI 00:21:07.201 23:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:07.201 23:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:07.201 23:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:07.201 23:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.dYc23xM5FI' 00:21:07.201 23:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:07.201 23:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=500484 00:21:07.201 23:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:07.201 23:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 500484 
/var/tmp/bdevperf.sock 00:21:07.201 23:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:07.201 23:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 500484 ']' 00:21:07.201 23:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:07.201 23:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:07.201 23:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:07.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:07.201 23:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:07.201 23:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.201 [2024-07-15 23:57:22.275407] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:21:07.201 [2024-07-15 23:57:22.275458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid500484 ] 00:21:07.201 [2024-07-15 23:57:22.330102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.201 [2024-07-15 23:57:22.382003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.146 23:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:08.146 23:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:21:08.146 23:57:23 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dYc23xM5FI 00:21:08.146 [2024-07-15 23:57:23.166848] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:08.146 [2024-07-15 23:57:23.166905] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:08.146 TLSTESTn1 00:21:08.146 23:57:23 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:08.406 Running I/O for 10 seconds... 
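With the key file readable at 0600 the attach succeeds and the TLSTESTn1 bdev is created; the ten-second verify run whose numbers follow is driven entirely over the bdevperf RPC socket. Condensed from the trace, with the long Jenkins workspace path shortened to the editorial placeholder $SPDK, the client side of run_bdevperf amounts to:

$SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &

# Attach an NVMe/TCP controller over TLS, pointing bdevperf at the PSK file.
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.dYc23xM5FI

# Run the queued verify workload (q=128, 4 KiB I/O, 10 s) and report IOPS/latency.
$SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests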
00:21:18.402 00:21:18.402 Latency(us) 00:21:18.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.402 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:18.402 Verification LBA range: start 0x0 length 0x2000 00:21:18.402 TLSTESTn1 : 10.02 4984.65 19.47 0.00 0.00 25641.43 4369.07 98740.91 00:21:18.402 =================================================================================================================== 00:21:18.402 Total : 4984.65 19.47 0.00 0.00 25641.43 4369.07 98740.91 00:21:18.402 0 00:21:18.402 23:57:33 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:18.402 23:57:33 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 500484 00:21:18.402 23:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 500484 ']' 00:21:18.402 23:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 500484 00:21:18.402 23:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:21:18.402 23:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:18.402 23:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 500484 00:21:18.402 23:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:21:18.402 23:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:21:18.402 23:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 500484' 00:21:18.402 killing process with pid 500484 00:21:18.402 23:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 500484 00:21:18.402 Received shutdown signal, test time was about 10.000000 seconds 00:21:18.402 00:21:18.402 Latency(us) 00:21:18.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.402 =================================================================================================================== 00:21:18.402 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:18.402 [2024-07-15 23:57:33.478473] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:18.402 23:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 500484 00:21:18.402 23:57:33 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.dYc23xM5FI 00:21:18.662 23:57:33 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dYc23xM5FI 00:21:18.662 23:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@642 -- # local es=0 00:21:18.662 23:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@644 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dYc23xM5FI 00:21:18.662 23:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@630 -- # local arg=run_bdevperf 00:21:18.662 23:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:18.662 23:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # type -t run_bdevperf 00:21:18.662 23:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:18.662 23:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dYc23xM5FI 00:21:18.663 23:57:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:21:18.663 23:57:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:18.663 23:57:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:18.663 23:57:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.dYc23xM5FI' 00:21:18.663 23:57:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:18.663 23:57:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=502535 00:21:18.663 23:57:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:18.663 23:57:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 502535 /var/tmp/bdevperf.sock 00:21:18.663 23:57:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:18.663 23:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 502535 ']' 00:21:18.663 23:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.663 23:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:18.663 23:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:18.663 23:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:18.663 23:57:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.663 [2024-07-15 23:57:33.656174] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:21:18.663 [2024-07-15 23:57:33.656239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid502535 ] 00:21:18.663 [2024-07-15 23:57:33.711013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.663 [2024-07-15 23:57:33.762261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.233 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:19.234 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:21:19.234 23:57:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dYc23xM5FI 00:21:19.494 [2024-07-15 23:57:34.547217] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:19.494 [2024-07-15 23:57:34.547259] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:19.494 [2024-07-15 23:57:34.547264] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.dYc23xM5FI 00:21:19.494 request: 00:21:19.494 { 00:21:19.494 "name": "TLSTEST", 00:21:19.494 "trtype": "tcp", 00:21:19.494 "traddr": "10.0.0.2", 00:21:19.494 "adrfam": "ipv4", 00:21:19.494 "trsvcid": "4420", 00:21:19.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:19.494 "prchk_reftag": false, 00:21:19.494 "prchk_guard": false, 00:21:19.494 "hdgst": false, 00:21:19.494 "ddgst": false, 00:21:19.494 "psk": "/tmp/tmp.dYc23xM5FI", 00:21:19.494 "method": "bdev_nvme_attach_controller", 00:21:19.494 "req_id": 1 00:21:19.494 } 00:21:19.494 Got JSON-RPC error response 00:21:19.494 response: 00:21:19.494 { 00:21:19.494 "code": -1, 00:21:19.494 "message": "Operation not permitted" 00:21:19.494 } 00:21:19.494 23:57:34 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 502535 00:21:19.494 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 502535 ']' 00:21:19.494 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 502535 00:21:19.494 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:21:19.494 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:19.494 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 502535 00:21:19.494 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:21:19.494 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:21:19.494 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 502535' 00:21:19.494 killing process with pid 502535 00:21:19.494 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 502535 00:21:19.494 Received shutdown signal, test time was about 10.000000 seconds 00:21:19.494 00:21:19.494 Latency(us) 00:21:19.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.494 =================================================================================================================== 00:21:19.494 Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:21:19.494 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 502535 00:21:19.754 23:57:34 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:19.754 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # es=1 00:21:19.754 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:21:19.754 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:21:19.754 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:21:19.754 23:57:34 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 500141 00:21:19.754 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 500141 ']' 00:21:19.754 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 500141 00:21:19.754 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:21:19.754 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:19.754 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 500141 00:21:19.754 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:21:19.754 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:21:19.754 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 500141' 00:21:19.754 killing process with pid 500141 00:21:19.754 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 500141 00:21:19.754 [2024-07-15 23:57:34.777192] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:19.754 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 500141 00:21:19.754 23:57:34 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:19.755 23:57:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:19.755 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:19.755 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.755 23:57:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=502883 00:21:19.755 23:57:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 502883 00:21:19.755 23:57:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:19.755 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 502883 ']' 00:21:19.755 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.755 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:19.755 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.755 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:19.755 23:57:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.014 [2024-07-15 23:57:34.963634] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:21:20.014 [2024-07-15 23:57:34.963688] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.014 [2024-07-15 23:57:35.054641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.014 [2024-07-15 23:57:35.107705] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.015 [2024-07-15 23:57:35.107741] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.015 [2024-07-15 23:57:35.107746] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.015 [2024-07-15 23:57:35.107750] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.015 [2024-07-15 23:57:35.107754] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:20.015 [2024-07-15 23:57:35.107778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.584 23:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:20.584 23:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:21:20.584 23:57:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:20.584 23:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:20.584 23:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.584 23:57:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.584 23:57:35 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.dYc23xM5FI 00:21:20.584 23:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@642 -- # local es=0 00:21:20.584 23:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@644 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.dYc23xM5FI 00:21:20.584 23:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@630 -- # local arg=setup_nvmf_tgt 00:21:20.584 23:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:20.584 23:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # type -t setup_nvmf_tgt 00:21:20.584 23:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:20.584 23:57:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # setup_nvmf_tgt /tmp/tmp.dYc23xM5FI 00:21:20.584 23:57:35 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.dYc23xM5FI 00:21:20.584 23:57:35 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:20.844 [2024-07-15 23:57:35.901815] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.844 23:57:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:21.103 23:57:36 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:21.103 [2024-07-15 23:57:36.198530] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:21.103 [2024-07-15 23:57:36.198704] 
tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.103 23:57:36 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:21.362 malloc0 00:21:21.362 23:57:36 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:21.362 23:57:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dYc23xM5FI 00:21:21.622 [2024-07-15 23:57:36.621351] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:21.622 [2024-07-15 23:57:36.621370] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:21.622 [2024-07-15 23:57:36.621389] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:21.622 request: 00:21:21.622 { 00:21:21.622 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.622 "host": "nqn.2016-06.io.spdk:host1", 00:21:21.622 "psk": "/tmp/tmp.dYc23xM5FI", 00:21:21.622 "method": "nvmf_subsystem_add_host", 00:21:21.622 "req_id": 1 00:21:21.622 } 00:21:21.622 Got JSON-RPC error response 00:21:21.622 response: 00:21:21.622 { 00:21:21.622 "code": -32603, 00:21:21.622 "message": "Internal error" 00:21:21.622 } 00:21:21.622 23:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # es=1 00:21:21.622 23:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:21:21.622 23:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:21:21.622 23:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:21:21.622 23:57:36 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 502883 00:21:21.622 23:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 502883 ']' 00:21:21.622 23:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 502883 00:21:21.622 23:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:21:21.622 23:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:21.622 23:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 502883 00:21:21.622 23:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:21:21.622 23:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:21:21.622 23:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 502883' 00:21:21.622 killing process with pid 502883 00:21:21.622 23:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 502883 00:21:21.622 23:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 502883 00:21:21.622 23:57:36 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.dYc23xM5FI 00:21:21.881 23:57:36 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:21.881 23:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:21.881 23:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:21.881 23:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.881 23:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=503256 00:21:21.881 23:57:36 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 503256 00:21:21.881 23:57:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:21.881 23:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 503256 ']' 00:21:21.881 23:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.881 23:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:21.881 23:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.881 23:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:21.881 23:57:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.882 [2024-07-15 23:57:36.871377] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:21:21.882 [2024-07-15 23:57:36.871457] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.882 [2024-07-15 23:57:36.965915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.882 [2024-07-15 23:57:37.020567] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.882 [2024-07-15 23:57:37.020600] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.882 [2024-07-15 23:57:37.020605] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.882 [2024-07-15 23:57:37.020610] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.882 [2024-07-15 23:57:37.020614] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
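The trace that follows repeats the same setup_nvmf_tgt sequence (target/tls.sh@49-@58) already run at tls.sh@165: transport, subsystem, TLS-enabled listener, malloc namespace, and the allowed host with its PSK. Condensed, again using the editorial shorthand $SPDK for the Jenkins workspace path, the target-side configuration is:

$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -s SPDK00000000000001 -m 10
# -k marks the listener as a secure channel ("secure_channel": true in the
# saved config further down), i.e. TLS is expected on this listener.
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k
$SPDK/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# The PSK file must stay at 0600; the chmod 0666 case earlier in this log was
# rejected with "Incorrect permissions for PSK file".
$SPDK/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dYc23xM5FI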
00:21:21.882 [2024-07-15 23:57:37.020628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.449 23:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:22.449 23:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:21:22.449 23:57:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:22.449 23:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:22.449 23:57:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.709 23:57:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.709 23:57:37 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.dYc23xM5FI 00:21:22.709 23:57:37 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.dYc23xM5FI 00:21:22.709 23:57:37 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:22.709 [2024-07-15 23:57:37.802809] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.709 23:57:37 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:22.968 23:57:37 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:22.969 [2024-07-15 23:57:38.099529] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:22.969 [2024-07-15 23:57:38.099708] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.969 23:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:23.228 malloc0 00:21:23.228 23:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:23.228 23:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dYc23xM5FI 00:21:23.488 [2024-07-15 23:57:38.534612] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:23.488 23:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=503613 00:21:23.488 23:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:23.488 23:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:23.488 23:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 503613 /var/tmp/bdevperf.sock 00:21:23.488 23:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 503613 ']' 00:21:23.488 23:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:23.488 23:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:23.488 23:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:23.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:23.488 23:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:23.488 23:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.488 [2024-07-15 23:57:38.609823] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:21:23.488 [2024-07-15 23:57:38.609873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid503613 ] 00:21:23.488 [2024-07-15 23:57:38.664741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.748 [2024-07-15 23:57:38.717022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.319 23:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:24.319 23:57:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:21:24.319 23:57:39 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dYc23xM5FI 00:21:24.319 [2024-07-15 23:57:39.497980] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:24.319 [2024-07-15 23:57:39.498037] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:24.579 TLSTESTn1 00:21:24.579 23:57:39 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:24.840 23:57:39 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:24.840 "subsystems": [ 00:21:24.840 { 00:21:24.840 "subsystem": "keyring", 00:21:24.840 "config": [] 00:21:24.840 }, 00:21:24.840 { 00:21:24.840 "subsystem": "iobuf", 00:21:24.840 "config": [ 00:21:24.840 { 00:21:24.840 "method": "iobuf_set_options", 00:21:24.840 "params": { 00:21:24.840 "small_pool_count": 8192, 00:21:24.840 "large_pool_count": 1024, 00:21:24.840 "small_bufsize": 8192, 00:21:24.840 "large_bufsize": 135168 00:21:24.840 } 00:21:24.840 } 00:21:24.840 ] 00:21:24.840 }, 00:21:24.840 { 00:21:24.840 "subsystem": "sock", 00:21:24.840 "config": [ 00:21:24.840 { 00:21:24.840 "method": "sock_set_default_impl", 00:21:24.840 "params": { 00:21:24.840 "impl_name": "posix" 00:21:24.840 } 00:21:24.840 }, 00:21:24.840 { 00:21:24.840 "method": "sock_impl_set_options", 00:21:24.840 "params": { 00:21:24.840 "impl_name": "ssl", 00:21:24.840 "recv_buf_size": 4096, 00:21:24.840 "send_buf_size": 4096, 00:21:24.840 "enable_recv_pipe": true, 00:21:24.840 "enable_quickack": false, 00:21:24.840 "enable_placement_id": 0, 00:21:24.840 "enable_zerocopy_send_server": true, 00:21:24.840 "enable_zerocopy_send_client": false, 00:21:24.840 "zerocopy_threshold": 0, 00:21:24.840 "tls_version": 0, 00:21:24.840 "enable_ktls": false 00:21:24.840 } 00:21:24.840 }, 00:21:24.840 { 00:21:24.840 "method": "sock_impl_set_options", 00:21:24.840 "params": { 00:21:24.840 "impl_name": "posix", 00:21:24.840 "recv_buf_size": 2097152, 00:21:24.840 "send_buf_size": 2097152, 00:21:24.840 "enable_recv_pipe": true, 
00:21:24.840 "enable_quickack": false, 00:21:24.840 "enable_placement_id": 0, 00:21:24.840 "enable_zerocopy_send_server": true, 00:21:24.840 "enable_zerocopy_send_client": false, 00:21:24.840 "zerocopy_threshold": 0, 00:21:24.840 "tls_version": 0, 00:21:24.840 "enable_ktls": false 00:21:24.840 } 00:21:24.840 } 00:21:24.840 ] 00:21:24.840 }, 00:21:24.840 { 00:21:24.840 "subsystem": "vmd", 00:21:24.840 "config": [] 00:21:24.840 }, 00:21:24.840 { 00:21:24.840 "subsystem": "accel", 00:21:24.840 "config": [ 00:21:24.840 { 00:21:24.840 "method": "accel_set_options", 00:21:24.840 "params": { 00:21:24.840 "small_cache_size": 128, 00:21:24.840 "large_cache_size": 16, 00:21:24.840 "task_count": 2048, 00:21:24.840 "sequence_count": 2048, 00:21:24.840 "buf_count": 2048 00:21:24.840 } 00:21:24.840 } 00:21:24.840 ] 00:21:24.840 }, 00:21:24.840 { 00:21:24.840 "subsystem": "bdev", 00:21:24.840 "config": [ 00:21:24.840 { 00:21:24.840 "method": "bdev_set_options", 00:21:24.840 "params": { 00:21:24.840 "bdev_io_pool_size": 65535, 00:21:24.840 "bdev_io_cache_size": 256, 00:21:24.840 "bdev_auto_examine": true, 00:21:24.840 "iobuf_small_cache_size": 128, 00:21:24.840 "iobuf_large_cache_size": 16 00:21:24.840 } 00:21:24.840 }, 00:21:24.840 { 00:21:24.840 "method": "bdev_raid_set_options", 00:21:24.840 "params": { 00:21:24.840 "process_window_size_kb": 1024 00:21:24.840 } 00:21:24.840 }, 00:21:24.840 { 00:21:24.840 "method": "bdev_iscsi_set_options", 00:21:24.840 "params": { 00:21:24.840 "timeout_sec": 30 00:21:24.840 } 00:21:24.840 }, 00:21:24.840 { 00:21:24.840 "method": "bdev_nvme_set_options", 00:21:24.840 "params": { 00:21:24.840 "action_on_timeout": "none", 00:21:24.840 "timeout_us": 0, 00:21:24.840 "timeout_admin_us": 0, 00:21:24.840 "keep_alive_timeout_ms": 10000, 00:21:24.840 "arbitration_burst": 0, 00:21:24.840 "low_priority_weight": 0, 00:21:24.840 "medium_priority_weight": 0, 00:21:24.840 "high_priority_weight": 0, 00:21:24.840 "nvme_adminq_poll_period_us": 10000, 00:21:24.840 "nvme_ioq_poll_period_us": 0, 00:21:24.840 "io_queue_requests": 0, 00:21:24.840 "delay_cmd_submit": true, 00:21:24.840 "transport_retry_count": 4, 00:21:24.840 "bdev_retry_count": 3, 00:21:24.840 "transport_ack_timeout": 0, 00:21:24.840 "ctrlr_loss_timeout_sec": 0, 00:21:24.840 "reconnect_delay_sec": 0, 00:21:24.840 "fast_io_fail_timeout_sec": 0, 00:21:24.840 "disable_auto_failback": false, 00:21:24.840 "generate_uuids": false, 00:21:24.840 "transport_tos": 0, 00:21:24.840 "nvme_error_stat": false, 00:21:24.840 "rdma_srq_size": 0, 00:21:24.840 "io_path_stat": false, 00:21:24.840 "allow_accel_sequence": false, 00:21:24.840 "rdma_max_cq_size": 0, 00:21:24.840 "rdma_cm_event_timeout_ms": 0, 00:21:24.840 "dhchap_digests": [ 00:21:24.841 "sha256", 00:21:24.841 "sha384", 00:21:24.841 "sha512" 00:21:24.841 ], 00:21:24.841 "dhchap_dhgroups": [ 00:21:24.841 "null", 00:21:24.841 "ffdhe2048", 00:21:24.841 "ffdhe3072", 00:21:24.841 "ffdhe4096", 00:21:24.841 "ffdhe6144", 00:21:24.841 "ffdhe8192" 00:21:24.841 ] 00:21:24.841 } 00:21:24.841 }, 00:21:24.841 { 00:21:24.841 "method": "bdev_nvme_set_hotplug", 00:21:24.841 "params": { 00:21:24.841 "period_us": 100000, 00:21:24.841 "enable": false 00:21:24.841 } 00:21:24.841 }, 00:21:24.841 { 00:21:24.841 "method": "bdev_malloc_create", 00:21:24.841 "params": { 00:21:24.841 "name": "malloc0", 00:21:24.841 "num_blocks": 8192, 00:21:24.841 "block_size": 4096, 00:21:24.841 "physical_block_size": 4096, 00:21:24.841 "uuid": "e4f16dca-70ff-4f1b-ac9b-061b07375a0a", 00:21:24.841 "optimal_io_boundary": 0 
00:21:24.841 } 00:21:24.841 }, 00:21:24.841 { 00:21:24.841 "method": "bdev_wait_for_examine" 00:21:24.841 } 00:21:24.841 ] 00:21:24.841 }, 00:21:24.841 { 00:21:24.841 "subsystem": "nbd", 00:21:24.841 "config": [] 00:21:24.841 }, 00:21:24.841 { 00:21:24.841 "subsystem": "scheduler", 00:21:24.841 "config": [ 00:21:24.841 { 00:21:24.841 "method": "framework_set_scheduler", 00:21:24.841 "params": { 00:21:24.841 "name": "static" 00:21:24.841 } 00:21:24.841 } 00:21:24.841 ] 00:21:24.841 }, 00:21:24.841 { 00:21:24.841 "subsystem": "nvmf", 00:21:24.841 "config": [ 00:21:24.841 { 00:21:24.841 "method": "nvmf_set_config", 00:21:24.841 "params": { 00:21:24.841 "discovery_filter": "match_any", 00:21:24.841 "admin_cmd_passthru": { 00:21:24.841 "identify_ctrlr": false 00:21:24.841 } 00:21:24.841 } 00:21:24.841 }, 00:21:24.841 { 00:21:24.841 "method": "nvmf_set_max_subsystems", 00:21:24.841 "params": { 00:21:24.841 "max_subsystems": 1024 00:21:24.841 } 00:21:24.841 }, 00:21:24.841 { 00:21:24.841 "method": "nvmf_set_crdt", 00:21:24.841 "params": { 00:21:24.841 "crdt1": 0, 00:21:24.841 "crdt2": 0, 00:21:24.841 "crdt3": 0 00:21:24.841 } 00:21:24.841 }, 00:21:24.841 { 00:21:24.841 "method": "nvmf_create_transport", 00:21:24.841 "params": { 00:21:24.841 "trtype": "TCP", 00:21:24.841 "max_queue_depth": 128, 00:21:24.841 "max_io_qpairs_per_ctrlr": 127, 00:21:24.841 "in_capsule_data_size": 4096, 00:21:24.841 "max_io_size": 131072, 00:21:24.841 "io_unit_size": 131072, 00:21:24.841 "max_aq_depth": 128, 00:21:24.841 "num_shared_buffers": 511, 00:21:24.841 "buf_cache_size": 4294967295, 00:21:24.841 "dif_insert_or_strip": false, 00:21:24.841 "zcopy": false, 00:21:24.841 "c2h_success": false, 00:21:24.841 "sock_priority": 0, 00:21:24.841 "abort_timeout_sec": 1, 00:21:24.841 "ack_timeout": 0, 00:21:24.841 "data_wr_pool_size": 0 00:21:24.841 } 00:21:24.841 }, 00:21:24.841 { 00:21:24.841 "method": "nvmf_create_subsystem", 00:21:24.841 "params": { 00:21:24.841 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.841 "allow_any_host": false, 00:21:24.841 "serial_number": "SPDK00000000000001", 00:21:24.841 "model_number": "SPDK bdev Controller", 00:21:24.841 "max_namespaces": 10, 00:21:24.841 "min_cntlid": 1, 00:21:24.841 "max_cntlid": 65519, 00:21:24.841 "ana_reporting": false 00:21:24.841 } 00:21:24.841 }, 00:21:24.841 { 00:21:24.841 "method": "nvmf_subsystem_add_host", 00:21:24.841 "params": { 00:21:24.841 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.841 "host": "nqn.2016-06.io.spdk:host1", 00:21:24.841 "psk": "/tmp/tmp.dYc23xM5FI" 00:21:24.841 } 00:21:24.841 }, 00:21:24.841 { 00:21:24.841 "method": "nvmf_subsystem_add_ns", 00:21:24.841 "params": { 00:21:24.841 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.841 "namespace": { 00:21:24.841 "nsid": 1, 00:21:24.841 "bdev_name": "malloc0", 00:21:24.841 "nguid": "E4F16DCA70FF4F1BAC9B061B07375A0A", 00:21:24.841 "uuid": "e4f16dca-70ff-4f1b-ac9b-061b07375a0a", 00:21:24.841 "no_auto_visible": false 00:21:24.841 } 00:21:24.841 } 00:21:24.841 }, 00:21:24.841 { 00:21:24.841 "method": "nvmf_subsystem_add_listener", 00:21:24.841 "params": { 00:21:24.841 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.841 "listen_address": { 00:21:24.841 "trtype": "TCP", 00:21:24.841 "adrfam": "IPv4", 00:21:24.841 "traddr": "10.0.0.2", 00:21:24.841 "trsvcid": "4420" 00:21:24.841 }, 00:21:24.841 "secure_channel": true 00:21:24.841 } 00:21:24.841 } 00:21:24.841 ] 00:21:24.841 } 00:21:24.841 ] 00:21:24.841 }' 00:21:24.841 23:57:39 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:25.102 23:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:25.103 "subsystems": [ 00:21:25.103 { 00:21:25.103 "subsystem": "keyring", 00:21:25.103 "config": [] 00:21:25.103 }, 00:21:25.103 { 00:21:25.103 "subsystem": "iobuf", 00:21:25.103 "config": [ 00:21:25.103 { 00:21:25.103 "method": "iobuf_set_options", 00:21:25.103 "params": { 00:21:25.103 "small_pool_count": 8192, 00:21:25.103 "large_pool_count": 1024, 00:21:25.103 "small_bufsize": 8192, 00:21:25.103 "large_bufsize": 135168 00:21:25.103 } 00:21:25.103 } 00:21:25.103 ] 00:21:25.103 }, 00:21:25.103 { 00:21:25.103 "subsystem": "sock", 00:21:25.103 "config": [ 00:21:25.103 { 00:21:25.103 "method": "sock_set_default_impl", 00:21:25.103 "params": { 00:21:25.103 "impl_name": "posix" 00:21:25.103 } 00:21:25.103 }, 00:21:25.103 { 00:21:25.103 "method": "sock_impl_set_options", 00:21:25.103 "params": { 00:21:25.103 "impl_name": "ssl", 00:21:25.103 "recv_buf_size": 4096, 00:21:25.103 "send_buf_size": 4096, 00:21:25.103 "enable_recv_pipe": true, 00:21:25.103 "enable_quickack": false, 00:21:25.103 "enable_placement_id": 0, 00:21:25.103 "enable_zerocopy_send_server": true, 00:21:25.103 "enable_zerocopy_send_client": false, 00:21:25.103 "zerocopy_threshold": 0, 00:21:25.103 "tls_version": 0, 00:21:25.103 "enable_ktls": false 00:21:25.103 } 00:21:25.103 }, 00:21:25.103 { 00:21:25.103 "method": "sock_impl_set_options", 00:21:25.103 "params": { 00:21:25.103 "impl_name": "posix", 00:21:25.103 "recv_buf_size": 2097152, 00:21:25.103 "send_buf_size": 2097152, 00:21:25.103 "enable_recv_pipe": true, 00:21:25.103 "enable_quickack": false, 00:21:25.103 "enable_placement_id": 0, 00:21:25.103 "enable_zerocopy_send_server": true, 00:21:25.103 "enable_zerocopy_send_client": false, 00:21:25.103 "zerocopy_threshold": 0, 00:21:25.103 "tls_version": 0, 00:21:25.103 "enable_ktls": false 00:21:25.103 } 00:21:25.103 } 00:21:25.103 ] 00:21:25.103 }, 00:21:25.103 { 00:21:25.103 "subsystem": "vmd", 00:21:25.103 "config": [] 00:21:25.103 }, 00:21:25.103 { 00:21:25.103 "subsystem": "accel", 00:21:25.103 "config": [ 00:21:25.103 { 00:21:25.103 "method": "accel_set_options", 00:21:25.103 "params": { 00:21:25.103 "small_cache_size": 128, 00:21:25.103 "large_cache_size": 16, 00:21:25.103 "task_count": 2048, 00:21:25.103 "sequence_count": 2048, 00:21:25.103 "buf_count": 2048 00:21:25.103 } 00:21:25.103 } 00:21:25.103 ] 00:21:25.103 }, 00:21:25.103 { 00:21:25.103 "subsystem": "bdev", 00:21:25.103 "config": [ 00:21:25.103 { 00:21:25.103 "method": "bdev_set_options", 00:21:25.103 "params": { 00:21:25.103 "bdev_io_pool_size": 65535, 00:21:25.103 "bdev_io_cache_size": 256, 00:21:25.103 "bdev_auto_examine": true, 00:21:25.103 "iobuf_small_cache_size": 128, 00:21:25.103 "iobuf_large_cache_size": 16 00:21:25.103 } 00:21:25.103 }, 00:21:25.103 { 00:21:25.103 "method": "bdev_raid_set_options", 00:21:25.103 "params": { 00:21:25.103 "process_window_size_kb": 1024 00:21:25.103 } 00:21:25.103 }, 00:21:25.103 { 00:21:25.103 "method": "bdev_iscsi_set_options", 00:21:25.103 "params": { 00:21:25.103 "timeout_sec": 30 00:21:25.103 } 00:21:25.103 }, 00:21:25.103 { 00:21:25.103 "method": "bdev_nvme_set_options", 00:21:25.103 "params": { 00:21:25.103 "action_on_timeout": "none", 00:21:25.103 "timeout_us": 0, 00:21:25.103 "timeout_admin_us": 0, 00:21:25.103 "keep_alive_timeout_ms": 10000, 00:21:25.103 "arbitration_burst": 0, 00:21:25.103 "low_priority_weight": 0, 
00:21:25.103 "medium_priority_weight": 0, 00:21:25.103 "high_priority_weight": 0, 00:21:25.103 "nvme_adminq_poll_period_us": 10000, 00:21:25.103 "nvme_ioq_poll_period_us": 0, 00:21:25.103 "io_queue_requests": 512, 00:21:25.103 "delay_cmd_submit": true, 00:21:25.103 "transport_retry_count": 4, 00:21:25.103 "bdev_retry_count": 3, 00:21:25.103 "transport_ack_timeout": 0, 00:21:25.103 "ctrlr_loss_timeout_sec": 0, 00:21:25.103 "reconnect_delay_sec": 0, 00:21:25.103 "fast_io_fail_timeout_sec": 0, 00:21:25.103 "disable_auto_failback": false, 00:21:25.103 "generate_uuids": false, 00:21:25.103 "transport_tos": 0, 00:21:25.103 "nvme_error_stat": false, 00:21:25.103 "rdma_srq_size": 0, 00:21:25.103 "io_path_stat": false, 00:21:25.103 "allow_accel_sequence": false, 00:21:25.103 "rdma_max_cq_size": 0, 00:21:25.103 "rdma_cm_event_timeout_ms": 0, 00:21:25.103 "dhchap_digests": [ 00:21:25.103 "sha256", 00:21:25.103 "sha384", 00:21:25.103 "sha512" 00:21:25.103 ], 00:21:25.103 "dhchap_dhgroups": [ 00:21:25.103 "null", 00:21:25.103 "ffdhe2048", 00:21:25.103 "ffdhe3072", 00:21:25.103 "ffdhe4096", 00:21:25.103 "ffdhe6144", 00:21:25.103 "ffdhe8192" 00:21:25.103 ] 00:21:25.103 } 00:21:25.103 }, 00:21:25.103 { 00:21:25.103 "method": "bdev_nvme_attach_controller", 00:21:25.103 "params": { 00:21:25.103 "name": "TLSTEST", 00:21:25.103 "trtype": "TCP", 00:21:25.103 "adrfam": "IPv4", 00:21:25.103 "traddr": "10.0.0.2", 00:21:25.103 "trsvcid": "4420", 00:21:25.103 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.103 "prchk_reftag": false, 00:21:25.103 "prchk_guard": false, 00:21:25.103 "ctrlr_loss_timeout_sec": 0, 00:21:25.103 "reconnect_delay_sec": 0, 00:21:25.103 "fast_io_fail_timeout_sec": 0, 00:21:25.103 "psk": "/tmp/tmp.dYc23xM5FI", 00:21:25.103 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:25.103 "hdgst": false, 00:21:25.103 "ddgst": false 00:21:25.103 } 00:21:25.103 }, 00:21:25.103 { 00:21:25.103 "method": "bdev_nvme_set_hotplug", 00:21:25.103 "params": { 00:21:25.103 "period_us": 100000, 00:21:25.103 "enable": false 00:21:25.103 } 00:21:25.103 }, 00:21:25.103 { 00:21:25.103 "method": "bdev_wait_for_examine" 00:21:25.103 } 00:21:25.103 ] 00:21:25.103 }, 00:21:25.103 { 00:21:25.103 "subsystem": "nbd", 00:21:25.103 "config": [] 00:21:25.103 } 00:21:25.103 ] 00:21:25.103 }' 00:21:25.103 23:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 503613 00:21:25.103 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 503613 ']' 00:21:25.103 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 503613 00:21:25.103 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:21:25.103 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:25.103 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 503613 00:21:25.103 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:21:25.103 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:21:25.103 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 503613' 00:21:25.103 killing process with pid 503613 00:21:25.103 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 503613 00:21:25.103 Received shutdown signal, test time was about 10.000000 seconds 00:21:25.103 00:21:25.103 Latency(us) 00:21:25.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.103 
=================================================================================================================== 00:21:25.103 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:25.103 [2024-07-15 23:57:40.146352] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:25.103 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 503613 00:21:25.103 23:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 503256 00:21:25.103 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 503256 ']' 00:21:25.103 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 503256 00:21:25.103 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:21:25.103 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:25.103 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 503256 00:21:25.390 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:21:25.390 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:21:25.390 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 503256' 00:21:25.390 killing process with pid 503256 00:21:25.390 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 503256 00:21:25.390 [2024-07-15 23:57:40.311990] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:25.390 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 503256 00:21:25.390 23:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:25.390 23:57:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:25.390 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:25.390 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.390 23:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:25.390 "subsystems": [ 00:21:25.390 { 00:21:25.390 "subsystem": "keyring", 00:21:25.390 "config": [] 00:21:25.390 }, 00:21:25.390 { 00:21:25.390 "subsystem": "iobuf", 00:21:25.390 "config": [ 00:21:25.390 { 00:21:25.390 "method": "iobuf_set_options", 00:21:25.390 "params": { 00:21:25.390 "small_pool_count": 8192, 00:21:25.390 "large_pool_count": 1024, 00:21:25.391 "small_bufsize": 8192, 00:21:25.391 "large_bufsize": 135168 00:21:25.391 } 00:21:25.391 } 00:21:25.391 ] 00:21:25.391 }, 00:21:25.391 { 00:21:25.391 "subsystem": "sock", 00:21:25.391 "config": [ 00:21:25.391 { 00:21:25.391 "method": "sock_set_default_impl", 00:21:25.391 "params": { 00:21:25.391 "impl_name": "posix" 00:21:25.391 } 00:21:25.391 }, 00:21:25.391 { 00:21:25.391 "method": "sock_impl_set_options", 00:21:25.391 "params": { 00:21:25.391 "impl_name": "ssl", 00:21:25.391 "recv_buf_size": 4096, 00:21:25.391 "send_buf_size": 4096, 00:21:25.391 "enable_recv_pipe": true, 00:21:25.391 "enable_quickack": false, 00:21:25.391 "enable_placement_id": 0, 00:21:25.391 "enable_zerocopy_send_server": true, 00:21:25.391 "enable_zerocopy_send_client": false, 00:21:25.391 "zerocopy_threshold": 0, 00:21:25.391 "tls_version": 0, 00:21:25.391 "enable_ktls": false 00:21:25.391 } 00:21:25.391 }, 00:21:25.391 { 00:21:25.391 "method": "sock_impl_set_options", 00:21:25.391 "params": { 
00:21:25.391 "impl_name": "posix", 00:21:25.391 "recv_buf_size": 2097152, 00:21:25.391 "send_buf_size": 2097152, 00:21:25.391 "enable_recv_pipe": true, 00:21:25.391 "enable_quickack": false, 00:21:25.391 "enable_placement_id": 0, 00:21:25.391 "enable_zerocopy_send_server": true, 00:21:25.391 "enable_zerocopy_send_client": false, 00:21:25.391 "zerocopy_threshold": 0, 00:21:25.391 "tls_version": 0, 00:21:25.391 "enable_ktls": false 00:21:25.391 } 00:21:25.391 } 00:21:25.391 ] 00:21:25.391 }, 00:21:25.391 { 00:21:25.391 "subsystem": "vmd", 00:21:25.391 "config": [] 00:21:25.391 }, 00:21:25.391 { 00:21:25.391 "subsystem": "accel", 00:21:25.391 "config": [ 00:21:25.391 { 00:21:25.391 "method": "accel_set_options", 00:21:25.391 "params": { 00:21:25.391 "small_cache_size": 128, 00:21:25.391 "large_cache_size": 16, 00:21:25.391 "task_count": 2048, 00:21:25.391 "sequence_count": 2048, 00:21:25.391 "buf_count": 2048 00:21:25.391 } 00:21:25.391 } 00:21:25.391 ] 00:21:25.391 }, 00:21:25.391 { 00:21:25.391 "subsystem": "bdev", 00:21:25.391 "config": [ 00:21:25.391 { 00:21:25.391 "method": "bdev_set_options", 00:21:25.391 "params": { 00:21:25.391 "bdev_io_pool_size": 65535, 00:21:25.391 "bdev_io_cache_size": 256, 00:21:25.391 "bdev_auto_examine": true, 00:21:25.391 "iobuf_small_cache_size": 128, 00:21:25.391 "iobuf_large_cache_size": 16 00:21:25.391 } 00:21:25.391 }, 00:21:25.391 { 00:21:25.391 "method": "bdev_raid_set_options", 00:21:25.391 "params": { 00:21:25.391 "process_window_size_kb": 1024 00:21:25.391 } 00:21:25.391 }, 00:21:25.391 { 00:21:25.391 "method": "bdev_iscsi_set_options", 00:21:25.391 "params": { 00:21:25.391 "timeout_sec": 30 00:21:25.391 } 00:21:25.391 }, 00:21:25.391 { 00:21:25.391 "method": "bdev_nvme_set_options", 00:21:25.391 "params": { 00:21:25.391 "action_on_timeout": "none", 00:21:25.391 "timeout_us": 0, 00:21:25.391 "timeout_admin_us": 0, 00:21:25.391 "keep_alive_timeout_ms": 10000, 00:21:25.391 "arbitration_burst": 0, 00:21:25.391 "low_priority_weight": 0, 00:21:25.391 "medium_priority_weight": 0, 00:21:25.391 "high_priority_weight": 0, 00:21:25.391 "nvme_adminq_poll_period_us": 10000, 00:21:25.391 "nvme_ioq_poll_period_us": 0, 00:21:25.391 "io_queue_requests": 0, 00:21:25.391 "delay_cmd_submit": true, 00:21:25.391 "transport_retry_count": 4, 00:21:25.391 "bdev_retry_count": 3, 00:21:25.391 "transport_ack_timeout": 0, 00:21:25.391 "ctrlr_loss_timeout_sec": 0, 00:21:25.391 "reconnect_delay_sec": 0, 00:21:25.391 "fast_io_fail_timeout_sec": 0, 00:21:25.391 "disable_auto_failback": false, 00:21:25.391 "generate_uuids": false, 00:21:25.391 "transport_tos": 0, 00:21:25.391 "nvme_error_stat": false, 00:21:25.391 "rdma_srq_size": 0, 00:21:25.391 "io_path_stat": false, 00:21:25.391 "allow_accel_sequence": false, 00:21:25.391 "rdma_max_cq_size": 0, 00:21:25.391 "rdma_cm_event_timeout_ms": 0, 00:21:25.391 "dhchap_digests": [ 00:21:25.391 "sha256", 00:21:25.391 "sha384", 00:21:25.391 "sha512" 00:21:25.391 ], 00:21:25.391 "dhchap_dhgroups": [ 00:21:25.391 "null", 00:21:25.391 "ffdhe2048", 00:21:25.391 "ffdhe3072", 00:21:25.391 "ffdhe4096", 00:21:25.391 "ffdhe6144", 00:21:25.391 "ffdhe8192" 00:21:25.391 ] 00:21:25.391 } 00:21:25.391 }, 00:21:25.391 { 00:21:25.391 "method": "bdev_nvme_set_hotplug", 00:21:25.391 "params": { 00:21:25.391 "period_us": 100000, 00:21:25.391 "enable": false 00:21:25.391 } 00:21:25.391 }, 00:21:25.391 { 00:21:25.391 "method": "bdev_malloc_create", 00:21:25.391 "params": { 00:21:25.391 "name": "malloc0", 00:21:25.391 "num_blocks": 8192, 00:21:25.391 "block_size": 
4096, 00:21:25.391 "physical_block_size": 4096, 00:21:25.391 "uuid": "e4f16dca-70ff-4f1b-ac9b-061b07375a0a", 00:21:25.391 "optimal_io_boundary": 0 00:21:25.391 } 00:21:25.391 }, 00:21:25.391 { 00:21:25.391 "method": "bdev_wait_for_examine" 00:21:25.391 } 00:21:25.391 ] 00:21:25.391 }, 00:21:25.391 { 00:21:25.391 "subsystem": "nbd", 00:21:25.391 "config": [] 00:21:25.391 }, 00:21:25.391 { 00:21:25.391 "subsystem": "scheduler", 00:21:25.391 "config": [ 00:21:25.391 { 00:21:25.391 "method": "framework_set_scheduler", 00:21:25.391 "params": { 00:21:25.391 "name": "static" 00:21:25.391 } 00:21:25.391 } 00:21:25.391 ] 00:21:25.391 }, 00:21:25.391 { 00:21:25.391 "subsystem": "nvmf", 00:21:25.391 "config": [ 00:21:25.391 { 00:21:25.391 "method": "nvmf_set_config", 00:21:25.391 "params": { 00:21:25.391 "discovery_filter": "match_any", 00:21:25.391 "admin_cmd_passthru": { 00:21:25.391 "identify_ctrlr": false 00:21:25.391 } 00:21:25.391 } 00:21:25.391 }, 00:21:25.391 { 00:21:25.391 "method": "nvmf_set_max_subsystems", 00:21:25.391 "params": { 00:21:25.391 "max_subsystems": 1024 00:21:25.391 } 00:21:25.391 }, 00:21:25.391 { 00:21:25.391 "method": "nvmf_set_crdt", 00:21:25.391 "params": { 00:21:25.391 "crdt1": 0, 00:21:25.391 "crdt2": 0, 00:21:25.391 "crdt3": 0 00:21:25.391 } 00:21:25.391 }, 00:21:25.391 { 00:21:25.391 "method": "nvmf_create_transport", 00:21:25.391 "params": { 00:21:25.391 "trtype": "TCP", 00:21:25.391 "max_queue_depth": 128, 00:21:25.391 "max_io_qpairs_per_ctrlr": 127, 00:21:25.391 "in_capsule_data_size": 4096, 00:21:25.391 "max_io_size": 131072, 00:21:25.391 "io_unit_size": 131072, 00:21:25.391 "max_aq_depth": 128, 00:21:25.391 "num_shared_buffers": 511, 00:21:25.391 "buf_cache_size": 4294967295, 00:21:25.391 "dif_insert_or_strip": false, 00:21:25.391 "zcopy": false, 00:21:25.391 "c2h_success": false, 00:21:25.391 "sock_priority": 0, 00:21:25.391 "abort_timeout_sec": 1, 00:21:25.391 "ack_timeout": 0, 00:21:25.391 "data_wr_pool_size": 0 00:21:25.391 } 00:21:25.391 }, 00:21:25.391 { 00:21:25.391 "method": "nvmf_create_subsystem", 00:21:25.391 "params": { 00:21:25.391 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.391 "allow_any_host": false, 00:21:25.391 "serial_number": "SPDK00000000000001", 00:21:25.391 "model_number": "SPDK bdev Controller", 00:21:25.391 "max_namespaces": 10, 00:21:25.391 "min_cntlid": 1, 00:21:25.391 "max_cntlid": 65519, 00:21:25.391 "ana_reporting": false 00:21:25.391 } 00:21:25.391 }, 00:21:25.391 { 00:21:25.391 "method": "nvmf_subsystem_add_host", 00:21:25.391 "params": { 00:21:25.391 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.391 "host": "nqn.2016-06.io.spdk:host1", 00:21:25.391 "psk": "/tmp/tmp.dYc23xM5FI" 00:21:25.391 } 00:21:25.391 }, 00:21:25.391 { 00:21:25.391 "method": "nvmf_subsystem_add_ns", 00:21:25.391 "params": { 00:21:25.391 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.391 "namespace": { 00:21:25.391 "nsid": 1, 00:21:25.391 "bdev_name": "malloc0", 00:21:25.391 "nguid": "E4F16DCA70FF4F1BAC9B061B07375A0A", 00:21:25.391 "uuid": "e4f16dca-70ff-4f1b-ac9b-061b07375a0a", 00:21:25.391 "no_auto_visible": false 00:21:25.392 } 00:21:25.392 } 00:21:25.392 }, 00:21:25.392 { 00:21:25.392 "method": "nvmf_subsystem_add_listener", 00:21:25.392 "params": { 00:21:25.392 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.392 "listen_address": { 00:21:25.392 "trtype": "TCP", 00:21:25.392 "adrfam": "IPv4", 00:21:25.392 "traddr": "10.0.0.2", 00:21:25.392 "trsvcid": "4420" 00:21:25.392 }, 00:21:25.392 "secure_channel": true 00:21:25.392 } 00:21:25.392 } 00:21:25.392 ] 
00:21:25.392 } 00:21:25.392 ] 00:21:25.392 }' 00:21:25.392 23:57:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:25.392 23:57:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=503969 00:21:25.392 23:57:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 503969 00:21:25.392 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 503969 ']' 00:21:25.392 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.392 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:25.392 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.392 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:25.392 23:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.392 [2024-07-15 23:57:40.472961] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:21:25.392 [2024-07-15 23:57:40.473016] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.652 [2024-07-15 23:57:40.558703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.652 [2024-07-15 23:57:40.612114] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.652 [2024-07-15 23:57:40.612142] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.652 [2024-07-15 23:57:40.612147] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.652 [2024-07-15 23:57:40.612152] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.652 [2024-07-15 23:57:40.612156] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
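The JSON above is the target configuration captured earlier with save_config and echoed back into a fresh nvmf_tgt through the -c /dev/fd/62 argument, so the TLS subsystem, the host/PSK mapping and the secure listener are recreated exactly as saved. A rough sketch of the same save/restore round trip outside the test harness (the /tmp/nvmf_config.json path is illustrative only; the test itself pipes the JSON over a file descriptor):

  scripts/rpc.py save_config > /tmp/nvmf_config.json    # dump the running target's configuration
  build/bin/nvmf_tgt -m 0x2 -c /tmp/nvmf_config.json    # replay it into a new target instance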
00:21:25.652 [2024-07-15 23:57:40.612198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.652 [2024-07-15 23:57:40.796128] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.652 [2024-07-15 23:57:40.812099] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:25.652 [2024-07-15 23:57:40.828151] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:25.652 [2024-07-15 23:57:40.838556] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.294 23:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:26.294 23:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:21:26.294 23:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:26.294 23:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:26.294 23:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.294 23:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.294 23:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=504304 00:21:26.294 23:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 504304 /var/tmp/bdevperf.sock 00:21:26.294 23:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 504304 ']' 00:21:26.294 23:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.294 23:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:26.294 23:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
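At this point the restored target is listening for NVMe/TCP with TLS on 10.0.0.2:4420, and bdevperf is launched with -z so it comes up idle and only exposes its RPC socket; the configuration is supplied either with -c or over that socket, and the actual I/O is kicked off later by perform_tests. A minimal sketch of that pattern, using the same binaries the log invokes:

  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # either pass a JSON config with -c, or configure over the socket with rpc.py -s /var/tmp/bdevperf.sock ...
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests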
00:21:26.294 23:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:26.294 23:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:26.294 23:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.294 23:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:26.294 "subsystems": [ 00:21:26.294 { 00:21:26.294 "subsystem": "keyring", 00:21:26.294 "config": [] 00:21:26.294 }, 00:21:26.294 { 00:21:26.294 "subsystem": "iobuf", 00:21:26.294 "config": [ 00:21:26.294 { 00:21:26.294 "method": "iobuf_set_options", 00:21:26.294 "params": { 00:21:26.294 "small_pool_count": 8192, 00:21:26.294 "large_pool_count": 1024, 00:21:26.294 "small_bufsize": 8192, 00:21:26.294 "large_bufsize": 135168 00:21:26.294 } 00:21:26.295 } 00:21:26.295 ] 00:21:26.295 }, 00:21:26.295 { 00:21:26.295 "subsystem": "sock", 00:21:26.295 "config": [ 00:21:26.295 { 00:21:26.295 "method": "sock_set_default_impl", 00:21:26.295 "params": { 00:21:26.295 "impl_name": "posix" 00:21:26.295 } 00:21:26.295 }, 00:21:26.295 { 00:21:26.295 "method": "sock_impl_set_options", 00:21:26.295 "params": { 00:21:26.295 "impl_name": "ssl", 00:21:26.295 "recv_buf_size": 4096, 00:21:26.295 "send_buf_size": 4096, 00:21:26.295 "enable_recv_pipe": true, 00:21:26.295 "enable_quickack": false, 00:21:26.295 "enable_placement_id": 0, 00:21:26.295 "enable_zerocopy_send_server": true, 00:21:26.295 "enable_zerocopy_send_client": false, 00:21:26.295 "zerocopy_threshold": 0, 00:21:26.295 "tls_version": 0, 00:21:26.295 "enable_ktls": false 00:21:26.295 } 00:21:26.295 }, 00:21:26.295 { 00:21:26.295 "method": "sock_impl_set_options", 00:21:26.295 "params": { 00:21:26.295 "impl_name": "posix", 00:21:26.295 "recv_buf_size": 2097152, 00:21:26.295 "send_buf_size": 2097152, 00:21:26.295 "enable_recv_pipe": true, 00:21:26.295 "enable_quickack": false, 00:21:26.295 "enable_placement_id": 0, 00:21:26.295 "enable_zerocopy_send_server": true, 00:21:26.295 "enable_zerocopy_send_client": false, 00:21:26.295 "zerocopy_threshold": 0, 00:21:26.295 "tls_version": 0, 00:21:26.295 "enable_ktls": false 00:21:26.295 } 00:21:26.295 } 00:21:26.295 ] 00:21:26.295 }, 00:21:26.295 { 00:21:26.295 "subsystem": "vmd", 00:21:26.295 "config": [] 00:21:26.295 }, 00:21:26.295 { 00:21:26.295 "subsystem": "accel", 00:21:26.295 "config": [ 00:21:26.295 { 00:21:26.295 "method": "accel_set_options", 00:21:26.295 "params": { 00:21:26.295 "small_cache_size": 128, 00:21:26.295 "large_cache_size": 16, 00:21:26.295 "task_count": 2048, 00:21:26.295 "sequence_count": 2048, 00:21:26.295 "buf_count": 2048 00:21:26.295 } 00:21:26.295 } 00:21:26.295 ] 00:21:26.295 }, 00:21:26.295 { 00:21:26.295 "subsystem": "bdev", 00:21:26.295 "config": [ 00:21:26.295 { 00:21:26.295 "method": "bdev_set_options", 00:21:26.295 "params": { 00:21:26.295 "bdev_io_pool_size": 65535, 00:21:26.295 "bdev_io_cache_size": 256, 00:21:26.295 "bdev_auto_examine": true, 00:21:26.295 "iobuf_small_cache_size": 128, 00:21:26.295 "iobuf_large_cache_size": 16 00:21:26.295 } 00:21:26.295 }, 00:21:26.295 { 00:21:26.295 "method": "bdev_raid_set_options", 00:21:26.295 "params": { 00:21:26.295 "process_window_size_kb": 1024 00:21:26.295 } 00:21:26.295 }, 00:21:26.295 { 00:21:26.295 "method": "bdev_iscsi_set_options", 00:21:26.295 "params": { 00:21:26.295 "timeout_sec": 30 00:21:26.295 } 00:21:26.295 }, 00:21:26.295 { 00:21:26.295 "method": 
"bdev_nvme_set_options", 00:21:26.295 "params": { 00:21:26.295 "action_on_timeout": "none", 00:21:26.295 "timeout_us": 0, 00:21:26.295 "timeout_admin_us": 0, 00:21:26.295 "keep_alive_timeout_ms": 10000, 00:21:26.295 "arbitration_burst": 0, 00:21:26.295 "low_priority_weight": 0, 00:21:26.295 "medium_priority_weight": 0, 00:21:26.295 "high_priority_weight": 0, 00:21:26.295 "nvme_adminq_poll_period_us": 10000, 00:21:26.295 "nvme_ioq_poll_period_us": 0, 00:21:26.295 "io_queue_requests": 512, 00:21:26.295 "delay_cmd_submit": true, 00:21:26.295 "transport_retry_count": 4, 00:21:26.295 "bdev_retry_count": 3, 00:21:26.295 "transport_ack_timeout": 0, 00:21:26.295 "ctrlr_loss_timeout_sec": 0, 00:21:26.295 "reconnect_delay_sec": 0, 00:21:26.295 "fast_io_fail_timeout_sec": 0, 00:21:26.295 "disable_auto_failback": false, 00:21:26.295 "generate_uuids": false, 00:21:26.295 "transport_tos": 0, 00:21:26.295 "nvme_error_stat": false, 00:21:26.295 "rdma_srq_size": 0, 00:21:26.295 "io_path_stat": false, 00:21:26.295 "allow_accel_sequence": false, 00:21:26.295 "rdma_max_cq_size": 0, 00:21:26.295 "rdma_cm_event_timeout_ms": 0, 00:21:26.295 "dhchap_digests": [ 00:21:26.295 "sha256", 00:21:26.295 "sha384", 00:21:26.295 "sha512" 00:21:26.295 ], 00:21:26.295 "dhchap_dhgroups": [ 00:21:26.295 "null", 00:21:26.295 "ffdhe2048", 00:21:26.295 "ffdhe3072", 00:21:26.295 "ffdhe4096", 00:21:26.295 "ffdhe6144", 00:21:26.295 "ffdhe8192" 00:21:26.295 ] 00:21:26.295 } 00:21:26.295 }, 00:21:26.295 { 00:21:26.295 "method": "bdev_nvme_attach_controller", 00:21:26.295 "params": { 00:21:26.295 "name": "TLSTEST", 00:21:26.295 "trtype": "TCP", 00:21:26.295 "adrfam": "IPv4", 00:21:26.295 "traddr": "10.0.0.2", 00:21:26.295 "trsvcid": "4420", 00:21:26.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:26.295 "prchk_reftag": false, 00:21:26.295 "prchk_guard": false, 00:21:26.295 "ctrlr_loss_timeout_sec": 0, 00:21:26.295 "reconnect_delay_sec": 0, 00:21:26.295 "fast_io_fail_timeout_sec": 0, 00:21:26.295 "psk": "/tmp/tmp.dYc23xM5FI", 00:21:26.295 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:26.295 "hdgst": false, 00:21:26.295 "ddgst": false 00:21:26.295 } 00:21:26.295 }, 00:21:26.295 { 00:21:26.295 "method": "bdev_nvme_set_hotplug", 00:21:26.295 "params": { 00:21:26.295 "period_us": 100000, 00:21:26.295 "enable": false 00:21:26.295 } 00:21:26.295 }, 00:21:26.295 { 00:21:26.295 "method": "bdev_wait_for_examine" 00:21:26.295 } 00:21:26.295 ] 00:21:26.295 }, 00:21:26.295 { 00:21:26.295 "subsystem": "nbd", 00:21:26.295 "config": [] 00:21:26.295 } 00:21:26.295 ] 00:21:26.295 }' 00:21:26.295 [2024-07-15 23:57:41.319017] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:21:26.295 [2024-07-15 23:57:41.319069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid504304 ] 00:21:26.295 [2024-07-15 23:57:41.373762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.295 [2024-07-15 23:57:41.426925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.572 [2024-07-15 23:57:41.551648] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:26.572 [2024-07-15 23:57:41.551708] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:27.142 23:57:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:27.142 23:57:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:21:27.142 23:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:27.142 Running I/O for 10 seconds... 00:21:37.138 00:21:37.138 Latency(us) 00:21:37.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.138 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:37.138 Verification LBA range: start 0x0 length 0x2000 00:21:37.138 TLSTESTn1 : 10.02 5573.74 21.77 0.00 0.00 22929.72 5898.24 46530.56 00:21:37.138 =================================================================================================================== 00:21:37.138 Total : 5573.74 21.77 0.00 0.00 22929.72 5898.24 46530.56 00:21:37.138 0 00:21:37.138 23:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:37.138 23:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 504304 00:21:37.138 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 504304 ']' 00:21:37.138 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 504304 00:21:37.138 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:21:37.138 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:37.138 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 504304 00:21:37.138 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:21:37.138 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:21:37.138 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 504304' 00:21:37.138 killing process with pid 504304 00:21:37.138 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 504304 00:21:37.138 Received shutdown signal, test time was about 10.000000 seconds 00:21:37.138 00:21:37.138 Latency(us) 00:21:37.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.138 =================================================================================================================== 00:21:37.138 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:37.138 [2024-07-15 23:57:52.274929] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:37.138 
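The 10-second verify run over the TLS-wrapped queue pair completes at 5573.74 IOPS with 4096-byte I/Os, and the MiB/s column is just IOPS times the I/O size. A quick check of the reported figure:

  echo '5573.74 * 4096 / 1048576' | bc -l    # about 21.77, matching the MiB/s column above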
23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 504304 00:21:37.399 23:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 503969 00:21:37.399 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 503969 ']' 00:21:37.399 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 503969 00:21:37.399 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:21:37.399 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:37.399 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 503969 00:21:37.399 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:21:37.399 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:21:37.399 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 503969' 00:21:37.399 killing process with pid 503969 00:21:37.399 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 503969 00:21:37.399 [2024-07-15 23:57:52.442971] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:37.399 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 503969 00:21:37.399 23:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:37.399 23:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:37.399 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:37.399 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.399 23:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=506345 00:21:37.399 23:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 506345 00:21:37.399 23:57:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:37.399 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 506345 ']' 00:21:37.399 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.399 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:37.399 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.399 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:37.399 23:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.659 [2024-07-15 23:57:52.630805] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:21:37.659 [2024-07-15 23:57:52.630877] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.659 [2024-07-15 23:57:52.704389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.659 [2024-07-15 23:57:52.770335] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:37.659 [2024-07-15 23:57:52.770372] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:37.659 [2024-07-15 23:57:52.770380] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:37.659 [2024-07-15 23:57:52.770386] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:37.659 [2024-07-15 23:57:52.770392] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:37.659 [2024-07-15 23:57:52.770409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.238 23:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:38.238 23:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:21:38.238 23:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:38.238 23:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:38.238 23:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.238 23:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.238 23:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.dYc23xM5FI 00:21:38.238 23:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.dYc23xM5FI 00:21:38.238 23:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:38.497 [2024-07-15 23:57:53.557172] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.497 23:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:38.757 23:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:38.757 [2024-07-15 23:57:53.849892] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:38.757 [2024-07-15 23:57:53.850094] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.757 23:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:39.017 malloc0 00:21:39.017 23:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:39.017 23:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dYc23xM5FI 00:21:39.277 [2024-07-15 23:57:54.297860] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:39.277 23:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:39.277 23:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=506704 00:21:39.277 23:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' 
SIGINT SIGTERM EXIT 00:21:39.277 23:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 506704 /var/tmp/bdevperf.sock 00:21:39.277 23:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 506704 ']' 00:21:39.277 23:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:39.277 23:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:39.277 23:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:39.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:39.277 23:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:39.277 23:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.277 [2024-07-15 23:57:54.342456] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:21:39.277 [2024-07-15 23:57:54.342507] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid506704 ] 00:21:39.277 [2024-07-15 23:57:54.424908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.537 [2024-07-15 23:57:54.478994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.106 23:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:40.106 23:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:21:40.106 23:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dYc23xM5FI 00:21:40.106 23:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:40.365 [2024-07-15 23:57:55.389373] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:40.365 nvme0n1 00:21:40.365 23:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:40.625 Running I/O for 1 seconds... 
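This second pass exercises the keyring-based client path: the PSK file is first registered as key0 with keyring_file_add_key and the controller is attached with --psk key0, rather than embedding the PSK file path in the bdev_nvme_attach_controller parameters as the earlier run did (the route flagged as deprecated for v24.09 in the warnings above). Issued by hand against the bdevperf RPC socket, the two calls from the trace are roughly:

  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dYc23xM5FI
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1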
00:21:41.568 00:21:41.568 Latency(us) 00:21:41.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.568 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:41.568 Verification LBA range: start 0x0 length 0x2000 00:21:41.568 nvme0n1 : 1.02 5963.97 23.30 0.00 0.00 21274.15 5434.03 70778.88 00:21:41.568 =================================================================================================================== 00:21:41.568 Total : 5963.97 23.30 0.00 0.00 21274.15 5434.03 70778.88 00:21:41.568 0 00:21:41.568 23:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 506704 00:21:41.568 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 506704 ']' 00:21:41.568 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 506704 00:21:41.568 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:21:41.568 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:41.568 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 506704 00:21:41.568 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:21:41.568 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:21:41.568 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 506704' 00:21:41.568 killing process with pid 506704 00:21:41.568 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 506704 00:21:41.568 Received shutdown signal, test time was about 1.000000 seconds 00:21:41.568 00:21:41.568 Latency(us) 00:21:41.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.568 =================================================================================================================== 00:21:41.568 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:41.568 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 506704 00:21:41.568 23:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 506345 00:21:41.568 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 506345 ']' 00:21:41.829 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 506345 00:21:41.829 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:21:41.829 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:41.829 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 506345 00:21:41.829 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:21:41.829 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:21:41.829 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 506345' 00:21:41.829 killing process with pid 506345 00:21:41.829 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 506345 00:21:41.829 [2024-07-15 23:57:56.794338] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:41.829 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 506345 00:21:41.829 23:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:21:41.829 23:57:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:41.829 23:57:56 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:41.829 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.829 23:57:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=507376 00:21:41.829 23:57:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 507376 00:21:41.829 23:57:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:41.829 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 507376 ']' 00:21:41.829 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.829 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:41.829 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.829 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:41.829 23:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.829 [2024-07-15 23:57:56.991728] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:21:41.829 [2024-07-15 23:57:56.991789] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.090 [2024-07-15 23:57:57.062864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.090 [2024-07-15 23:57:57.127266] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.090 [2024-07-15 23:57:57.127303] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.090 [2024-07-15 23:57:57.127311] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.090 [2024-07-15 23:57:57.127318] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.090 [2024-07-15 23:57:57.127323] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
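A fresh target (pid 507376) is started here with the full tracepoint mask (-e 0xFFFF), and the notices above spell out the two ways to get at the trace data it produces. Following the log's own hints (build/bin is assumed here as the usual location of the spdk_trace tool):

  build/bin/spdk_trace -s nvmf -i 0      # snapshot the nvmf app's tracepoints at runtime
  cp /dev/shm/nvmf_trace.0 /tmp/         # or keep the ring buffer for offline analysis/debug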
00:21:42.090 [2024-07-15 23:57:57.127346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.661 23:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:42.661 23:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:21:42.661 23:57:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:42.661 23:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:42.661 23:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.661 23:57:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.661 23:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:21:42.661 23:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:42.661 23:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.661 [2024-07-15 23:57:57.789972] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.661 malloc0 00:21:42.661 [2024-07-15 23:57:57.816726] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:42.661 [2024-07-15 23:57:57.816925] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.661 23:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:42.661 23:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=507409 00:21:42.661 23:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 507409 /var/tmp/bdevperf.sock 00:21:42.661 23:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:42.661 23:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 507409 ']' 00:21:42.661 23:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:42.661 23:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:42.661 23:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:42.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:42.661 23:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:42.661 23:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.922 [2024-07-15 23:57:57.893202] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:21:42.922 [2024-07-15 23:57:57.893262] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid507409 ] 00:21:42.922 [2024-07-15 23:57:57.951911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.922 [2024-07-15 23:57:58.005973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.922 23:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:42.922 23:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:21:42.923 23:57:58 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dYc23xM5FI 00:21:43.183 23:57:58 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:43.183 [2024-07-15 23:57:58.363022] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:43.444 nvme0n1 00:21:43.444 23:57:58 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:43.444 Running I/O for 1 seconds... 00:21:44.385 00:21:44.385 Latency(us) 00:21:44.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.385 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:44.385 Verification LBA range: start 0x0 length 0x2000 00:21:44.385 nvme0n1 : 1.02 4200.40 16.41 0.00 0.00 30212.52 4614.83 59419.31 00:21:44.385 =================================================================================================================== 00:21:44.385 Total : 4200.40 16.41 0.00 0.00 30212.52 4614.83 59419.31 00:21:44.385 0 00:21:44.647 23:57:59 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:21:44.647 23:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:44.647 23:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.647 23:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:44.647 23:57:59 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:21:44.647 "subsystems": [ 00:21:44.647 { 00:21:44.647 "subsystem": "keyring", 00:21:44.647 "config": [ 00:21:44.647 { 00:21:44.647 "method": "keyring_file_add_key", 00:21:44.647 "params": { 00:21:44.647 "name": "key0", 00:21:44.647 "path": "/tmp/tmp.dYc23xM5FI" 00:21:44.647 } 00:21:44.647 } 00:21:44.647 ] 00:21:44.647 }, 00:21:44.647 { 00:21:44.647 "subsystem": "iobuf", 00:21:44.647 "config": [ 00:21:44.647 { 00:21:44.647 "method": "iobuf_set_options", 00:21:44.647 "params": { 00:21:44.647 "small_pool_count": 8192, 00:21:44.647 "large_pool_count": 1024, 00:21:44.647 "small_bufsize": 8192, 00:21:44.647 "large_bufsize": 135168 00:21:44.647 } 00:21:44.647 } 00:21:44.647 ] 00:21:44.647 }, 00:21:44.647 { 00:21:44.647 "subsystem": "sock", 00:21:44.647 "config": [ 00:21:44.647 { 00:21:44.647 "method": "sock_set_default_impl", 00:21:44.647 "params": { 00:21:44.647 "impl_name": "posix" 00:21:44.647 } 00:21:44.647 }, 00:21:44.647 { 00:21:44.647 "method": 
"sock_impl_set_options", 00:21:44.647 "params": { 00:21:44.647 "impl_name": "ssl", 00:21:44.647 "recv_buf_size": 4096, 00:21:44.647 "send_buf_size": 4096, 00:21:44.647 "enable_recv_pipe": true, 00:21:44.647 "enable_quickack": false, 00:21:44.647 "enable_placement_id": 0, 00:21:44.647 "enable_zerocopy_send_server": true, 00:21:44.647 "enable_zerocopy_send_client": false, 00:21:44.647 "zerocopy_threshold": 0, 00:21:44.647 "tls_version": 0, 00:21:44.647 "enable_ktls": false 00:21:44.647 } 00:21:44.647 }, 00:21:44.647 { 00:21:44.647 "method": "sock_impl_set_options", 00:21:44.647 "params": { 00:21:44.647 "impl_name": "posix", 00:21:44.647 "recv_buf_size": 2097152, 00:21:44.647 "send_buf_size": 2097152, 00:21:44.647 "enable_recv_pipe": true, 00:21:44.647 "enable_quickack": false, 00:21:44.648 "enable_placement_id": 0, 00:21:44.648 "enable_zerocopy_send_server": true, 00:21:44.648 "enable_zerocopy_send_client": false, 00:21:44.648 "zerocopy_threshold": 0, 00:21:44.648 "tls_version": 0, 00:21:44.648 "enable_ktls": false 00:21:44.648 } 00:21:44.648 } 00:21:44.648 ] 00:21:44.648 }, 00:21:44.648 { 00:21:44.648 "subsystem": "vmd", 00:21:44.648 "config": [] 00:21:44.648 }, 00:21:44.648 { 00:21:44.648 "subsystem": "accel", 00:21:44.648 "config": [ 00:21:44.648 { 00:21:44.648 "method": "accel_set_options", 00:21:44.648 "params": { 00:21:44.648 "small_cache_size": 128, 00:21:44.648 "large_cache_size": 16, 00:21:44.648 "task_count": 2048, 00:21:44.648 "sequence_count": 2048, 00:21:44.648 "buf_count": 2048 00:21:44.648 } 00:21:44.648 } 00:21:44.648 ] 00:21:44.648 }, 00:21:44.648 { 00:21:44.648 "subsystem": "bdev", 00:21:44.648 "config": [ 00:21:44.648 { 00:21:44.648 "method": "bdev_set_options", 00:21:44.648 "params": { 00:21:44.648 "bdev_io_pool_size": 65535, 00:21:44.648 "bdev_io_cache_size": 256, 00:21:44.648 "bdev_auto_examine": true, 00:21:44.648 "iobuf_small_cache_size": 128, 00:21:44.648 "iobuf_large_cache_size": 16 00:21:44.648 } 00:21:44.648 }, 00:21:44.648 { 00:21:44.648 "method": "bdev_raid_set_options", 00:21:44.648 "params": { 00:21:44.648 "process_window_size_kb": 1024 00:21:44.648 } 00:21:44.648 }, 00:21:44.648 { 00:21:44.648 "method": "bdev_iscsi_set_options", 00:21:44.648 "params": { 00:21:44.648 "timeout_sec": 30 00:21:44.648 } 00:21:44.648 }, 00:21:44.648 { 00:21:44.648 "method": "bdev_nvme_set_options", 00:21:44.648 "params": { 00:21:44.648 "action_on_timeout": "none", 00:21:44.648 "timeout_us": 0, 00:21:44.648 "timeout_admin_us": 0, 00:21:44.648 "keep_alive_timeout_ms": 10000, 00:21:44.648 "arbitration_burst": 0, 00:21:44.648 "low_priority_weight": 0, 00:21:44.648 "medium_priority_weight": 0, 00:21:44.648 "high_priority_weight": 0, 00:21:44.648 "nvme_adminq_poll_period_us": 10000, 00:21:44.648 "nvme_ioq_poll_period_us": 0, 00:21:44.648 "io_queue_requests": 0, 00:21:44.648 "delay_cmd_submit": true, 00:21:44.648 "transport_retry_count": 4, 00:21:44.648 "bdev_retry_count": 3, 00:21:44.648 "transport_ack_timeout": 0, 00:21:44.648 "ctrlr_loss_timeout_sec": 0, 00:21:44.648 "reconnect_delay_sec": 0, 00:21:44.648 "fast_io_fail_timeout_sec": 0, 00:21:44.648 "disable_auto_failback": false, 00:21:44.648 "generate_uuids": false, 00:21:44.648 "transport_tos": 0, 00:21:44.648 "nvme_error_stat": false, 00:21:44.648 "rdma_srq_size": 0, 00:21:44.648 "io_path_stat": false, 00:21:44.648 "allow_accel_sequence": false, 00:21:44.648 "rdma_max_cq_size": 0, 00:21:44.648 "rdma_cm_event_timeout_ms": 0, 00:21:44.648 "dhchap_digests": [ 00:21:44.648 "sha256", 00:21:44.648 "sha384", 00:21:44.648 "sha512" 
00:21:44.648 ], 00:21:44.648 "dhchap_dhgroups": [ 00:21:44.648 "null", 00:21:44.648 "ffdhe2048", 00:21:44.648 "ffdhe3072", 00:21:44.648 "ffdhe4096", 00:21:44.648 "ffdhe6144", 00:21:44.648 "ffdhe8192" 00:21:44.648 ] 00:21:44.648 } 00:21:44.648 }, 00:21:44.648 { 00:21:44.648 "method": "bdev_nvme_set_hotplug", 00:21:44.648 "params": { 00:21:44.648 "period_us": 100000, 00:21:44.648 "enable": false 00:21:44.648 } 00:21:44.648 }, 00:21:44.648 { 00:21:44.648 "method": "bdev_malloc_create", 00:21:44.648 "params": { 00:21:44.648 "name": "malloc0", 00:21:44.648 "num_blocks": 8192, 00:21:44.648 "block_size": 4096, 00:21:44.648 "physical_block_size": 4096, 00:21:44.648 "uuid": "681db383-7f00-4d6e-9e70-c9f85465dd5d", 00:21:44.648 "optimal_io_boundary": 0 00:21:44.648 } 00:21:44.648 }, 00:21:44.648 { 00:21:44.648 "method": "bdev_wait_for_examine" 00:21:44.648 } 00:21:44.648 ] 00:21:44.648 }, 00:21:44.648 { 00:21:44.648 "subsystem": "nbd", 00:21:44.648 "config": [] 00:21:44.648 }, 00:21:44.648 { 00:21:44.648 "subsystem": "scheduler", 00:21:44.648 "config": [ 00:21:44.648 { 00:21:44.648 "method": "framework_set_scheduler", 00:21:44.648 "params": { 00:21:44.648 "name": "static" 00:21:44.648 } 00:21:44.648 } 00:21:44.648 ] 00:21:44.648 }, 00:21:44.648 { 00:21:44.648 "subsystem": "nvmf", 00:21:44.648 "config": [ 00:21:44.648 { 00:21:44.648 "method": "nvmf_set_config", 00:21:44.648 "params": { 00:21:44.648 "discovery_filter": "match_any", 00:21:44.648 "admin_cmd_passthru": { 00:21:44.648 "identify_ctrlr": false 00:21:44.648 } 00:21:44.648 } 00:21:44.648 }, 00:21:44.648 { 00:21:44.648 "method": "nvmf_set_max_subsystems", 00:21:44.648 "params": { 00:21:44.648 "max_subsystems": 1024 00:21:44.648 } 00:21:44.648 }, 00:21:44.648 { 00:21:44.648 "method": "nvmf_set_crdt", 00:21:44.648 "params": { 00:21:44.648 "crdt1": 0, 00:21:44.648 "crdt2": 0, 00:21:44.648 "crdt3": 0 00:21:44.648 } 00:21:44.648 }, 00:21:44.648 { 00:21:44.648 "method": "nvmf_create_transport", 00:21:44.648 "params": { 00:21:44.648 "trtype": "TCP", 00:21:44.648 "max_queue_depth": 128, 00:21:44.648 "max_io_qpairs_per_ctrlr": 127, 00:21:44.648 "in_capsule_data_size": 4096, 00:21:44.648 "max_io_size": 131072, 00:21:44.648 "io_unit_size": 131072, 00:21:44.648 "max_aq_depth": 128, 00:21:44.648 "num_shared_buffers": 511, 00:21:44.648 "buf_cache_size": 4294967295, 00:21:44.648 "dif_insert_or_strip": false, 00:21:44.648 "zcopy": false, 00:21:44.648 "c2h_success": false, 00:21:44.648 "sock_priority": 0, 00:21:44.648 "abort_timeout_sec": 1, 00:21:44.648 "ack_timeout": 0, 00:21:44.648 "data_wr_pool_size": 0 00:21:44.648 } 00:21:44.648 }, 00:21:44.648 { 00:21:44.648 "method": "nvmf_create_subsystem", 00:21:44.648 "params": { 00:21:44.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.648 "allow_any_host": false, 00:21:44.648 "serial_number": "00000000000000000000", 00:21:44.648 "model_number": "SPDK bdev Controller", 00:21:44.648 "max_namespaces": 32, 00:21:44.648 "min_cntlid": 1, 00:21:44.648 "max_cntlid": 65519, 00:21:44.648 "ana_reporting": false 00:21:44.648 } 00:21:44.648 }, 00:21:44.648 { 00:21:44.648 "method": "nvmf_subsystem_add_host", 00:21:44.648 "params": { 00:21:44.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.648 "host": "nqn.2016-06.io.spdk:host1", 00:21:44.648 "psk": "key0" 00:21:44.648 } 00:21:44.648 }, 00:21:44.648 { 00:21:44.648 "method": "nvmf_subsystem_add_ns", 00:21:44.648 "params": { 00:21:44.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.648 "namespace": { 00:21:44.648 "nsid": 1, 00:21:44.648 "bdev_name": "malloc0", 00:21:44.648 
"nguid": "681DB3837F004D6E9E70C9F85465DD5D", 00:21:44.648 "uuid": "681db383-7f00-4d6e-9e70-c9f85465dd5d", 00:21:44.648 "no_auto_visible": false 00:21:44.648 } 00:21:44.648 } 00:21:44.648 }, 00:21:44.648 { 00:21:44.648 "method": "nvmf_subsystem_add_listener", 00:21:44.648 "params": { 00:21:44.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.648 "listen_address": { 00:21:44.648 "trtype": "TCP", 00:21:44.648 "adrfam": "IPv4", 00:21:44.648 "traddr": "10.0.0.2", 00:21:44.648 "trsvcid": "4420" 00:21:44.648 }, 00:21:44.648 "secure_channel": false, 00:21:44.648 "sock_impl": "ssl" 00:21:44.648 } 00:21:44.648 } 00:21:44.648 ] 00:21:44.648 } 00:21:44.648 ] 00:21:44.648 }' 00:21:44.648 23:57:59 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:44.910 23:57:59 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:21:44.910 "subsystems": [ 00:21:44.910 { 00:21:44.910 "subsystem": "keyring", 00:21:44.910 "config": [ 00:21:44.910 { 00:21:44.910 "method": "keyring_file_add_key", 00:21:44.910 "params": { 00:21:44.910 "name": "key0", 00:21:44.910 "path": "/tmp/tmp.dYc23xM5FI" 00:21:44.910 } 00:21:44.910 } 00:21:44.910 ] 00:21:44.910 }, 00:21:44.910 { 00:21:44.910 "subsystem": "iobuf", 00:21:44.910 "config": [ 00:21:44.910 { 00:21:44.910 "method": "iobuf_set_options", 00:21:44.910 "params": { 00:21:44.910 "small_pool_count": 8192, 00:21:44.910 "large_pool_count": 1024, 00:21:44.910 "small_bufsize": 8192, 00:21:44.910 "large_bufsize": 135168 00:21:44.910 } 00:21:44.910 } 00:21:44.910 ] 00:21:44.910 }, 00:21:44.910 { 00:21:44.910 "subsystem": "sock", 00:21:44.910 "config": [ 00:21:44.910 { 00:21:44.910 "method": "sock_set_default_impl", 00:21:44.910 "params": { 00:21:44.910 "impl_name": "posix" 00:21:44.910 } 00:21:44.910 }, 00:21:44.910 { 00:21:44.910 "method": "sock_impl_set_options", 00:21:44.910 "params": { 00:21:44.910 "impl_name": "ssl", 00:21:44.910 "recv_buf_size": 4096, 00:21:44.910 "send_buf_size": 4096, 00:21:44.910 "enable_recv_pipe": true, 00:21:44.910 "enable_quickack": false, 00:21:44.910 "enable_placement_id": 0, 00:21:44.910 "enable_zerocopy_send_server": true, 00:21:44.910 "enable_zerocopy_send_client": false, 00:21:44.910 "zerocopy_threshold": 0, 00:21:44.910 "tls_version": 0, 00:21:44.910 "enable_ktls": false 00:21:44.910 } 00:21:44.910 }, 00:21:44.910 { 00:21:44.910 "method": "sock_impl_set_options", 00:21:44.910 "params": { 00:21:44.910 "impl_name": "posix", 00:21:44.910 "recv_buf_size": 2097152, 00:21:44.910 "send_buf_size": 2097152, 00:21:44.910 "enable_recv_pipe": true, 00:21:44.910 "enable_quickack": false, 00:21:44.910 "enable_placement_id": 0, 00:21:44.910 "enable_zerocopy_send_server": true, 00:21:44.910 "enable_zerocopy_send_client": false, 00:21:44.910 "zerocopy_threshold": 0, 00:21:44.910 "tls_version": 0, 00:21:44.910 "enable_ktls": false 00:21:44.910 } 00:21:44.910 } 00:21:44.910 ] 00:21:44.910 }, 00:21:44.910 { 00:21:44.910 "subsystem": "vmd", 00:21:44.910 "config": [] 00:21:44.910 }, 00:21:44.910 { 00:21:44.910 "subsystem": "accel", 00:21:44.910 "config": [ 00:21:44.910 { 00:21:44.910 "method": "accel_set_options", 00:21:44.910 "params": { 00:21:44.910 "small_cache_size": 128, 00:21:44.910 "large_cache_size": 16, 00:21:44.910 "task_count": 2048, 00:21:44.910 "sequence_count": 2048, 00:21:44.910 "buf_count": 2048 00:21:44.910 } 00:21:44.910 } 00:21:44.910 ] 00:21:44.910 }, 00:21:44.910 { 00:21:44.910 "subsystem": "bdev", 00:21:44.910 "config": [ 00:21:44.910 { 
00:21:44.910 "method": "bdev_set_options", 00:21:44.910 "params": { 00:21:44.910 "bdev_io_pool_size": 65535, 00:21:44.910 "bdev_io_cache_size": 256, 00:21:44.910 "bdev_auto_examine": true, 00:21:44.910 "iobuf_small_cache_size": 128, 00:21:44.910 "iobuf_large_cache_size": 16 00:21:44.910 } 00:21:44.910 }, 00:21:44.910 { 00:21:44.910 "method": "bdev_raid_set_options", 00:21:44.910 "params": { 00:21:44.910 "process_window_size_kb": 1024 00:21:44.910 } 00:21:44.910 }, 00:21:44.910 { 00:21:44.910 "method": "bdev_iscsi_set_options", 00:21:44.910 "params": { 00:21:44.910 "timeout_sec": 30 00:21:44.910 } 00:21:44.910 }, 00:21:44.910 { 00:21:44.910 "method": "bdev_nvme_set_options", 00:21:44.910 "params": { 00:21:44.910 "action_on_timeout": "none", 00:21:44.910 "timeout_us": 0, 00:21:44.910 "timeout_admin_us": 0, 00:21:44.910 "keep_alive_timeout_ms": 10000, 00:21:44.910 "arbitration_burst": 0, 00:21:44.910 "low_priority_weight": 0, 00:21:44.910 "medium_priority_weight": 0, 00:21:44.910 "high_priority_weight": 0, 00:21:44.910 "nvme_adminq_poll_period_us": 10000, 00:21:44.910 "nvme_ioq_poll_period_us": 0, 00:21:44.910 "io_queue_requests": 512, 00:21:44.910 "delay_cmd_submit": true, 00:21:44.910 "transport_retry_count": 4, 00:21:44.910 "bdev_retry_count": 3, 00:21:44.910 "transport_ack_timeout": 0, 00:21:44.910 "ctrlr_loss_timeout_sec": 0, 00:21:44.910 "reconnect_delay_sec": 0, 00:21:44.910 "fast_io_fail_timeout_sec": 0, 00:21:44.910 "disable_auto_failback": false, 00:21:44.910 "generate_uuids": false, 00:21:44.910 "transport_tos": 0, 00:21:44.910 "nvme_error_stat": false, 00:21:44.910 "rdma_srq_size": 0, 00:21:44.910 "io_path_stat": false, 00:21:44.910 "allow_accel_sequence": false, 00:21:44.910 "rdma_max_cq_size": 0, 00:21:44.910 "rdma_cm_event_timeout_ms": 0, 00:21:44.910 "dhchap_digests": [ 00:21:44.910 "sha256", 00:21:44.910 "sha384", 00:21:44.910 "sha512" 00:21:44.910 ], 00:21:44.910 "dhchap_dhgroups": [ 00:21:44.910 "null", 00:21:44.910 "ffdhe2048", 00:21:44.910 "ffdhe3072", 00:21:44.910 "ffdhe4096", 00:21:44.910 "ffdhe6144", 00:21:44.910 "ffdhe8192" 00:21:44.910 ] 00:21:44.910 } 00:21:44.910 }, 00:21:44.910 { 00:21:44.910 "method": "bdev_nvme_attach_controller", 00:21:44.910 "params": { 00:21:44.910 "name": "nvme0", 00:21:44.910 "trtype": "TCP", 00:21:44.910 "adrfam": "IPv4", 00:21:44.910 "traddr": "10.0.0.2", 00:21:44.910 "trsvcid": "4420", 00:21:44.910 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.910 "prchk_reftag": false, 00:21:44.910 "prchk_guard": false, 00:21:44.910 "ctrlr_loss_timeout_sec": 0, 00:21:44.910 "reconnect_delay_sec": 0, 00:21:44.910 "fast_io_fail_timeout_sec": 0, 00:21:44.910 "psk": "key0", 00:21:44.910 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:44.910 "hdgst": false, 00:21:44.910 "ddgst": false 00:21:44.910 } 00:21:44.910 }, 00:21:44.910 { 00:21:44.910 "method": "bdev_nvme_set_hotplug", 00:21:44.911 "params": { 00:21:44.911 "period_us": 100000, 00:21:44.911 "enable": false 00:21:44.911 } 00:21:44.911 }, 00:21:44.911 { 00:21:44.911 "method": "bdev_enable_histogram", 00:21:44.911 "params": { 00:21:44.911 "name": "nvme0n1", 00:21:44.911 "enable": true 00:21:44.911 } 00:21:44.911 }, 00:21:44.911 { 00:21:44.911 "method": "bdev_wait_for_examine" 00:21:44.911 } 00:21:44.911 ] 00:21:44.911 }, 00:21:44.911 { 00:21:44.911 "subsystem": "nbd", 00:21:44.911 "config": [] 00:21:44.911 } 00:21:44.911 ] 00:21:44.911 }' 00:21:44.911 23:57:59 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 507409 00:21:44.911 23:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' 
-z 507409 ']' 00:21:44.911 23:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 507409 00:21:44.911 23:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:21:44.911 23:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:44.911 23:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 507409 00:21:44.911 23:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:21:44.911 23:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:21:44.911 23:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 507409' 00:21:44.911 killing process with pid 507409 00:21:44.911 23:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 507409 00:21:44.911 Received shutdown signal, test time was about 1.000000 seconds 00:21:44.911 00:21:44.911 Latency(us) 00:21:44.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.911 =================================================================================================================== 00:21:44.911 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:44.911 23:57:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 507409 00:21:44.911 23:58:00 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 507376 00:21:44.911 23:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 507376 ']' 00:21:44.911 23:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 507376 00:21:44.911 23:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:21:44.911 23:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:44.911 23:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 507376 00:21:45.172 23:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:21:45.172 23:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:21:45.172 23:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 507376' 00:21:45.172 killing process with pid 507376 00:21:45.172 23:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 507376 00:21:45.172 23:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 507376 00:21:45.172 23:58:00 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:21:45.172 23:58:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:45.172 23:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:45.172 23:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.172 23:58:00 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:21:45.172 "subsystems": [ 00:21:45.172 { 00:21:45.172 "subsystem": "keyring", 00:21:45.172 "config": [ 00:21:45.172 { 00:21:45.172 "method": "keyring_file_add_key", 00:21:45.172 "params": { 00:21:45.172 "name": "key0", 00:21:45.172 "path": "/tmp/tmp.dYc23xM5FI" 00:21:45.172 } 00:21:45.172 } 00:21:45.172 ] 00:21:45.172 }, 00:21:45.172 { 00:21:45.172 "subsystem": "iobuf", 00:21:45.172 "config": [ 00:21:45.172 { 00:21:45.172 "method": "iobuf_set_options", 00:21:45.172 "params": { 00:21:45.172 "small_pool_count": 8192, 00:21:45.172 "large_pool_count": 1024, 00:21:45.172 "small_bufsize": 8192, 00:21:45.172 "large_bufsize": 135168 00:21:45.172 } 00:21:45.172 } 
00:21:45.172 ] 00:21:45.172 }, 00:21:45.172 { 00:21:45.172 "subsystem": "sock", 00:21:45.172 "config": [ 00:21:45.172 { 00:21:45.172 "method": "sock_set_default_impl", 00:21:45.172 "params": { 00:21:45.172 "impl_name": "posix" 00:21:45.172 } 00:21:45.172 }, 00:21:45.172 { 00:21:45.172 "method": "sock_impl_set_options", 00:21:45.172 "params": { 00:21:45.172 "impl_name": "ssl", 00:21:45.172 "recv_buf_size": 4096, 00:21:45.172 "send_buf_size": 4096, 00:21:45.172 "enable_recv_pipe": true, 00:21:45.172 "enable_quickack": false, 00:21:45.172 "enable_placement_id": 0, 00:21:45.172 "enable_zerocopy_send_server": true, 00:21:45.172 "enable_zerocopy_send_client": false, 00:21:45.172 "zerocopy_threshold": 0, 00:21:45.172 "tls_version": 0, 00:21:45.172 "enable_ktls": false 00:21:45.172 } 00:21:45.172 }, 00:21:45.172 { 00:21:45.172 "method": "sock_impl_set_options", 00:21:45.172 "params": { 00:21:45.172 "impl_name": "posix", 00:21:45.172 "recv_buf_size": 2097152, 00:21:45.172 "send_buf_size": 2097152, 00:21:45.172 "enable_recv_pipe": true, 00:21:45.172 "enable_quickack": false, 00:21:45.172 "enable_placement_id": 0, 00:21:45.172 "enable_zerocopy_send_server": true, 00:21:45.172 "enable_zerocopy_send_client": false, 00:21:45.172 "zerocopy_threshold": 0, 00:21:45.172 "tls_version": 0, 00:21:45.172 "enable_ktls": false 00:21:45.172 } 00:21:45.172 } 00:21:45.172 ] 00:21:45.173 }, 00:21:45.173 { 00:21:45.173 "subsystem": "vmd", 00:21:45.173 "config": [] 00:21:45.173 }, 00:21:45.173 { 00:21:45.173 "subsystem": "accel", 00:21:45.173 "config": [ 00:21:45.173 { 00:21:45.173 "method": "accel_set_options", 00:21:45.173 "params": { 00:21:45.173 "small_cache_size": 128, 00:21:45.173 "large_cache_size": 16, 00:21:45.173 "task_count": 2048, 00:21:45.173 "sequence_count": 2048, 00:21:45.173 "buf_count": 2048 00:21:45.173 } 00:21:45.173 } 00:21:45.173 ] 00:21:45.173 }, 00:21:45.173 { 00:21:45.173 "subsystem": "bdev", 00:21:45.173 "config": [ 00:21:45.173 { 00:21:45.173 "method": "bdev_set_options", 00:21:45.173 "params": { 00:21:45.173 "bdev_io_pool_size": 65535, 00:21:45.173 "bdev_io_cache_size": 256, 00:21:45.173 "bdev_auto_examine": true, 00:21:45.173 "iobuf_small_cache_size": 128, 00:21:45.173 "iobuf_large_cache_size": 16 00:21:45.173 } 00:21:45.173 }, 00:21:45.173 { 00:21:45.173 "method": "bdev_raid_set_options", 00:21:45.173 "params": { 00:21:45.173 "process_window_size_kb": 1024 00:21:45.173 } 00:21:45.173 }, 00:21:45.173 { 00:21:45.173 "method": "bdev_iscsi_set_options", 00:21:45.173 "params": { 00:21:45.173 "timeout_sec": 30 00:21:45.173 } 00:21:45.173 }, 00:21:45.173 { 00:21:45.173 "method": "bdev_nvme_set_options", 00:21:45.173 "params": { 00:21:45.173 "action_on_timeout": "none", 00:21:45.173 "timeout_us": 0, 00:21:45.173 "timeout_admin_us": 0, 00:21:45.173 "keep_alive_timeout_ms": 10000, 00:21:45.173 "arbitration_burst": 0, 00:21:45.173 "low_priority_weight": 0, 00:21:45.173 "medium_priority_weight": 0, 00:21:45.173 "high_priority_weight": 0, 00:21:45.173 "nvme_adminq_poll_period_us": 10000, 00:21:45.173 "nvme_ioq_poll_period_us": 0, 00:21:45.173 "io_queue_requests": 0, 00:21:45.173 "delay_cmd_submit": true, 00:21:45.173 "transport_retry_count": 4, 00:21:45.173 "bdev_retry_count": 3, 00:21:45.173 "transport_ack_timeout": 0, 00:21:45.173 "ctrlr_loss_timeout_sec": 0, 00:21:45.173 "reconnect_delay_sec": 0, 00:21:45.173 "fast_io_fail_timeout_sec": 0, 00:21:45.173 "disable_auto_failback": false, 00:21:45.173 "generate_uuids": false, 00:21:45.173 "transport_tos": 0, 00:21:45.173 "nvme_error_stat": false, 
00:21:45.173 "rdma_srq_size": 0, 00:21:45.173 "io_path_stat": false, 00:21:45.173 "allow_accel_sequence": false, 00:21:45.173 "rdma_max_cq_size": 0, 00:21:45.173 "rdma_cm_event_timeout_ms": 0, 00:21:45.173 "dhchap_digests": [ 00:21:45.173 "sha256", 00:21:45.173 "sha384", 00:21:45.173 "sha512" 00:21:45.173 ], 00:21:45.173 "dhchap_dhgroups": [ 00:21:45.173 "null", 00:21:45.173 "ffdhe2048", 00:21:45.173 "ffdhe3072", 00:21:45.173 "ffdhe4096", 00:21:45.173 "ffdhe6144", 00:21:45.173 "ffdhe8192" 00:21:45.173 ] 00:21:45.173 } 00:21:45.173 }, 00:21:45.173 { 00:21:45.173 "method": "bdev_nvme_set_hotplug", 00:21:45.173 "params": { 00:21:45.173 "period_us": 100000, 00:21:45.173 "enable": false 00:21:45.173 } 00:21:45.173 }, 00:21:45.173 { 00:21:45.173 "method": "bdev_malloc_create", 00:21:45.173 "params": { 00:21:45.173 "name": "malloc0", 00:21:45.173 "num_blocks": 8192, 00:21:45.173 "block_size": 4096, 00:21:45.173 "physical_block_size": 4096, 00:21:45.173 "uuid": "681db383-7f00-4d6e-9e70-c9f85465dd5d", 00:21:45.173 "optimal_io_boundary": 0 00:21:45.173 } 00:21:45.173 }, 00:21:45.173 { 00:21:45.173 "method": "bdev_wait_for_examine" 00:21:45.173 } 00:21:45.173 ] 00:21:45.173 }, 00:21:45.173 { 00:21:45.173 "subsystem": "nbd", 00:21:45.173 "config": [] 00:21:45.173 }, 00:21:45.173 { 00:21:45.173 "subsystem": "scheduler", 00:21:45.173 "config": [ 00:21:45.173 { 00:21:45.173 "method": "framework_set_scheduler", 00:21:45.173 "params": { 00:21:45.173 "name": "static" 00:21:45.173 } 00:21:45.173 } 00:21:45.173 ] 00:21:45.173 }, 00:21:45.173 { 00:21:45.173 "subsystem": "nvmf", 00:21:45.173 "config": [ 00:21:45.173 { 00:21:45.173 "method": "nvmf_set_config", 00:21:45.173 "params": { 00:21:45.173 "discovery_filter": "match_any", 00:21:45.173 "admin_cmd_passthru": { 00:21:45.173 "identify_ctrlr": false 00:21:45.173 } 00:21:45.173 } 00:21:45.173 }, 00:21:45.173 { 00:21:45.173 "method": "nvmf_set_max_subsystems", 00:21:45.173 "params": { 00:21:45.173 "max_subsystems": 1024 00:21:45.173 } 00:21:45.173 }, 00:21:45.173 { 00:21:45.173 "method": "nvmf_set_crdt", 00:21:45.173 "params": { 00:21:45.173 "crdt1": 0, 00:21:45.173 "crdt2": 0, 00:21:45.173 "crdt3": 0 00:21:45.173 } 00:21:45.173 }, 00:21:45.173 { 00:21:45.173 "method": "nvmf_create_transport", 00:21:45.173 "params": { 00:21:45.173 "trtype": "TCP", 00:21:45.173 "max_queue_depth": 128, 00:21:45.173 "max_io_qpairs_per_ctrlr": 127, 00:21:45.173 "in_capsule_data_size": 4096, 00:21:45.173 "max_io_size": 131072, 00:21:45.173 "io_unit_size": 131072, 00:21:45.173 "max_aq_depth": 128, 00:21:45.173 "num_shared_buffers": 511, 00:21:45.173 "buf_cache_size": 4294967295, 00:21:45.173 "dif_insert_or_strip": false, 00:21:45.173 "zcopy": false, 00:21:45.173 "c2h_success": false, 00:21:45.173 "sock_priority": 0, 00:21:45.173 "abort_timeout_sec": 1, 00:21:45.173 "ack_timeout": 0, 00:21:45.173 "data_wr_pool_size": 0 00:21:45.173 } 00:21:45.173 }, 00:21:45.173 { 00:21:45.173 "method": "nvmf_create_subsystem", 00:21:45.173 "params": { 00:21:45.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.173 "allow_any_host": false, 00:21:45.173 "serial_number": "00000000000000000000", 00:21:45.173 "model_number": "SPDK bdev Controller", 00:21:45.173 "max_namespaces": 32, 00:21:45.173 "min_cntlid": 1, 00:21:45.173 "max_cntlid": 65519, 00:21:45.173 "ana_reporting": false 00:21:45.173 } 00:21:45.173 }, 00:21:45.173 { 00:21:45.173 "method": "nvmf_subsystem_add_host", 00:21:45.173 "params": { 00:21:45.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.173 "host": "nqn.2016-06.io.spdk:host1", 
00:21:45.173 "psk": "key0" 00:21:45.173 } 00:21:45.173 }, 00:21:45.173 { 00:21:45.173 "method": "nvmf_subsystem_add_ns", 00:21:45.173 "params": { 00:21:45.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.173 "namespace": { 00:21:45.173 "nsid": 1, 00:21:45.173 "bdev_name": "malloc0", 00:21:45.173 "nguid": "681DB3837F004D6E9E70C9F85465DD5D", 00:21:45.173 "uuid": "681db383-7f00-4d6e-9e70-c9f85465dd5d", 00:21:45.173 "no_auto_visible": false 00:21:45.173 } 00:21:45.173 } 00:21:45.173 }, 00:21:45.173 { 00:21:45.173 "method": "nvmf_subsystem_add_listener", 00:21:45.173 "params": { 00:21:45.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.173 "listen_address": { 00:21:45.173 "trtype": "TCP", 00:21:45.173 "adrfam": "IPv4", 00:21:45.173 "traddr": "10.0.0.2", 00:21:45.173 "trsvcid": "4420" 00:21:45.173 }, 00:21:45.173 "secure_channel": false, 00:21:45.173 "sock_impl": "ssl" 00:21:45.173 } 00:21:45.173 } 00:21:45.173 ] 00:21:45.173 } 00:21:45.173 ] 00:21:45.173 }' 00:21:45.173 23:58:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=508080 00:21:45.173 23:58:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 508080 00:21:45.173 23:58:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:45.173 23:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 508080 ']' 00:21:45.173 23:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.173 23:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:45.173 23:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.173 23:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:45.173 23:58:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.173 [2024-07-15 23:58:00.328157] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:21:45.173 [2024-07-15 23:58:00.328209] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.435 [2024-07-15 23:58:00.400548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.435 [2024-07-15 23:58:00.462745] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.435 [2024-07-15 23:58:00.462784] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.435 [2024-07-15 23:58:00.462792] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.435 [2024-07-15 23:58:00.462798] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.435 [2024-07-15 23:58:00.462804] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:45.435 [2024-07-15 23:58:00.462860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.695 [2024-07-15 23:58:00.660352] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.695 [2024-07-15 23:58:00.692353] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:45.695 [2024-07-15 23:58:00.704425] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.957 23:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:45.957 23:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:21:45.957 23:58:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:45.957 23:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:45.957 23:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.957 23:58:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.218 23:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=508111 00:21:46.218 23:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 508111 /var/tmp/bdevperf.sock 00:21:46.218 23:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 508111 ']' 00:21:46.218 23:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.218 23:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:46.218 23:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
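With the target listening again, the test launches bdevperf in wait-for-RPC mode (-z) on a private socket and blocks until that socket exists before sending it any configuration; that is what the waitforlisten step above is doing. A rough sketch of the launch-and-wait pattern, with the flags and paths taken from this run:

    bdevperf=./build/examples/bdevperf
    sock=/var/tmp/bdevperf.sock
    "$bdevperf" -m 2 -z -r "$sock" -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
    bdevperf_pid=$!
    # waitforlisten equivalent: poll until the UNIX-domain RPC socket appears.
    while ! [ -S "$sock" ]; do
        kill -0 "$bdevperf_pid" || exit 1   # bail out if bdevperf already died
        sleep 0.1
    done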
00:21:46.218 23:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:46.218 23:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:46.218 23:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.218 23:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:21:46.218 "subsystems": [ 00:21:46.218 { 00:21:46.218 "subsystem": "keyring", 00:21:46.218 "config": [ 00:21:46.218 { 00:21:46.218 "method": "keyring_file_add_key", 00:21:46.218 "params": { 00:21:46.218 "name": "key0", 00:21:46.218 "path": "/tmp/tmp.dYc23xM5FI" 00:21:46.218 } 00:21:46.218 } 00:21:46.218 ] 00:21:46.218 }, 00:21:46.218 { 00:21:46.218 "subsystem": "iobuf", 00:21:46.218 "config": [ 00:21:46.218 { 00:21:46.218 "method": "iobuf_set_options", 00:21:46.218 "params": { 00:21:46.218 "small_pool_count": 8192, 00:21:46.218 "large_pool_count": 1024, 00:21:46.218 "small_bufsize": 8192, 00:21:46.218 "large_bufsize": 135168 00:21:46.218 } 00:21:46.218 } 00:21:46.218 ] 00:21:46.218 }, 00:21:46.218 { 00:21:46.218 "subsystem": "sock", 00:21:46.218 "config": [ 00:21:46.218 { 00:21:46.218 "method": "sock_set_default_impl", 00:21:46.218 "params": { 00:21:46.218 "impl_name": "posix" 00:21:46.218 } 00:21:46.218 }, 00:21:46.218 { 00:21:46.218 "method": "sock_impl_set_options", 00:21:46.218 "params": { 00:21:46.218 "impl_name": "ssl", 00:21:46.218 "recv_buf_size": 4096, 00:21:46.218 "send_buf_size": 4096, 00:21:46.218 "enable_recv_pipe": true, 00:21:46.218 "enable_quickack": false, 00:21:46.218 "enable_placement_id": 0, 00:21:46.218 "enable_zerocopy_send_server": true, 00:21:46.218 "enable_zerocopy_send_client": false, 00:21:46.218 "zerocopy_threshold": 0, 00:21:46.218 "tls_version": 0, 00:21:46.218 "enable_ktls": false 00:21:46.218 } 00:21:46.218 }, 00:21:46.218 { 00:21:46.218 "method": "sock_impl_set_options", 00:21:46.218 "params": { 00:21:46.218 "impl_name": "posix", 00:21:46.218 "recv_buf_size": 2097152, 00:21:46.218 "send_buf_size": 2097152, 00:21:46.218 "enable_recv_pipe": true, 00:21:46.218 "enable_quickack": false, 00:21:46.218 "enable_placement_id": 0, 00:21:46.218 "enable_zerocopy_send_server": true, 00:21:46.218 "enable_zerocopy_send_client": false, 00:21:46.218 "zerocopy_threshold": 0, 00:21:46.218 "tls_version": 0, 00:21:46.218 "enable_ktls": false 00:21:46.218 } 00:21:46.218 } 00:21:46.218 ] 00:21:46.218 }, 00:21:46.218 { 00:21:46.218 "subsystem": "vmd", 00:21:46.218 "config": [] 00:21:46.218 }, 00:21:46.218 { 00:21:46.218 "subsystem": "accel", 00:21:46.218 "config": [ 00:21:46.218 { 00:21:46.218 "method": "accel_set_options", 00:21:46.218 "params": { 00:21:46.218 "small_cache_size": 128, 00:21:46.218 "large_cache_size": 16, 00:21:46.218 "task_count": 2048, 00:21:46.218 "sequence_count": 2048, 00:21:46.218 "buf_count": 2048 00:21:46.218 } 00:21:46.218 } 00:21:46.218 ] 00:21:46.218 }, 00:21:46.218 { 00:21:46.218 "subsystem": "bdev", 00:21:46.218 "config": [ 00:21:46.218 { 00:21:46.218 "method": "bdev_set_options", 00:21:46.218 "params": { 00:21:46.218 "bdev_io_pool_size": 65535, 00:21:46.218 "bdev_io_cache_size": 256, 00:21:46.218 "bdev_auto_examine": true, 00:21:46.218 "iobuf_small_cache_size": 128, 00:21:46.218 "iobuf_large_cache_size": 16 00:21:46.218 } 00:21:46.218 }, 00:21:46.218 { 00:21:46.219 "method": "bdev_raid_set_options", 00:21:46.219 "params": { 00:21:46.219 "process_window_size_kb": 1024 00:21:46.219 } 
00:21:46.219 }, 00:21:46.219 { 00:21:46.219 "method": "bdev_iscsi_set_options", 00:21:46.219 "params": { 00:21:46.219 "timeout_sec": 30 00:21:46.219 } 00:21:46.219 }, 00:21:46.219 { 00:21:46.219 "method": "bdev_nvme_set_options", 00:21:46.219 "params": { 00:21:46.219 "action_on_timeout": "none", 00:21:46.219 "timeout_us": 0, 00:21:46.219 "timeout_admin_us": 0, 00:21:46.219 "keep_alive_timeout_ms": 10000, 00:21:46.219 "arbitration_burst": 0, 00:21:46.219 "low_priority_weight": 0, 00:21:46.219 "medium_priority_weight": 0, 00:21:46.219 "high_priority_weight": 0, 00:21:46.219 "nvme_adminq_poll_period_us": 10000, 00:21:46.219 "nvme_ioq_poll_period_us": 0, 00:21:46.219 "io_queue_requests": 512, 00:21:46.219 "delay_cmd_submit": true, 00:21:46.219 "transport_retry_count": 4, 00:21:46.219 "bdev_retry_count": 3, 00:21:46.219 "transport_ack_timeout": 0, 00:21:46.219 "ctrlr_loss_timeout_sec": 0, 00:21:46.219 "reconnect_delay_sec": 0, 00:21:46.219 "fast_io_fail_timeout_sec": 0, 00:21:46.219 "disable_auto_failback": false, 00:21:46.219 "generate_uuids": false, 00:21:46.219 "transport_tos": 0, 00:21:46.219 "nvme_error_stat": false, 00:21:46.219 "rdma_srq_size": 0, 00:21:46.219 "io_path_stat": false, 00:21:46.219 "allow_accel_sequence": false, 00:21:46.219 "rdma_max_cq_size": 0, 00:21:46.219 "rdma_cm_event_timeout_ms": 0, 00:21:46.219 "dhchap_digests": [ 00:21:46.219 "sha256", 00:21:46.219 "sha384", 00:21:46.219 "sha512" 00:21:46.219 ], 00:21:46.219 "dhchap_dhgroups": [ 00:21:46.219 "null", 00:21:46.219 "ffdhe2048", 00:21:46.219 "ffdhe3072", 00:21:46.219 "ffdhe4096", 00:21:46.219 "ffdhe6144", 00:21:46.219 "ffdhe8192" 00:21:46.219 ] 00:21:46.219 } 00:21:46.219 }, 00:21:46.219 { 00:21:46.219 "method": "bdev_nvme_attach_controller", 00:21:46.219 "params": { 00:21:46.219 "name": "nvme0", 00:21:46.219 "trtype": "TCP", 00:21:46.219 "adrfam": "IPv4", 00:21:46.219 "traddr": "10.0.0.2", 00:21:46.219 "trsvcid": "4420", 00:21:46.219 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:46.219 "prchk_reftag": false, 00:21:46.219 "prchk_guard": false, 00:21:46.219 "ctrlr_loss_timeout_sec": 0, 00:21:46.219 "reconnect_delay_sec": 0, 00:21:46.219 "fast_io_fail_timeout_sec": 0, 00:21:46.219 "psk": "key0", 00:21:46.219 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:46.219 "hdgst": false, 00:21:46.219 "ddgst": false 00:21:46.219 } 00:21:46.219 }, 00:21:46.219 { 00:21:46.219 "method": "bdev_nvme_set_hotplug", 00:21:46.219 "params": { 00:21:46.219 "period_us": 100000, 00:21:46.219 "enable": false 00:21:46.219 } 00:21:46.219 }, 00:21:46.219 { 00:21:46.219 "method": "bdev_enable_histogram", 00:21:46.219 "params": { 00:21:46.219 "name": "nvme0n1", 00:21:46.219 "enable": true 00:21:46.219 } 00:21:46.219 }, 00:21:46.219 { 00:21:46.219 "method": "bdev_wait_for_examine" 00:21:46.219 } 00:21:46.219 ] 00:21:46.219 }, 00:21:46.219 { 00:21:46.219 "subsystem": "nbd", 00:21:46.219 "config": [] 00:21:46.219 } 00:21:46.219 ] 00:21:46.219 }' 00:21:46.219 [2024-07-15 23:58:01.194001] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
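The bdevperf configuration echoed above registers the PSK file and attaches the TLS-protected controller at start-up; earlier in this run the same sequence was driven interactively over the bdevperf RPC socket. A hedged recap of that RPC sequence, with the addresses, NQNs and key path copied from this log:

    rpc="./scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $rpc keyring_file_add_key key0 /tmp/tmp.dYc23xM5FI
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # Confirm the controller actually showed up before starting I/O.
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests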
00:21:46.219 [2024-07-15 23:58:01.194053] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid508111 ] 00:21:46.219 [2024-07-15 23:58:01.275965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.219 [2024-07-15 23:58:01.331152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.479 [2024-07-15 23:58:01.464943] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:47.052 23:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:47.052 23:58:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:21:47.052 23:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:47.052 23:58:01 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:21:47.052 23:58:02 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.052 23:58:02 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:47.052 Running I/O for 1 seconds... 00:21:48.439 00:21:48.439 Latency(us) 00:21:48.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.439 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:48.439 Verification LBA range: start 0x0 length 0x2000 00:21:48.439 nvme0n1 : 1.05 3832.87 14.97 0.00 0.00 32725.41 4587.52 88692.05 00:21:48.439 =================================================================================================================== 00:21:48.439 Total : 3832.87 14.97 0.00 0.00 32725.41 4587.52 88692.05 00:21:48.439 0 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@800 -- # type=--id 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@801 -- # id=0 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@802 -- # '[' --id = --pid ']' 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # shm_files=nvmf_trace.0 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # [[ -z nvmf_trace.0 ]] 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # for n in $shm_files 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:48.439 nvmf_trace.0 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@815 -- # return 0 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 508111 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 508111 ']' 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 508111 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@947 -- # uname 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 508111 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 508111' 00:21:48.439 killing process with pid 508111 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 508111 00:21:48.439 Received shutdown signal, test time was about 1.000000 seconds 00:21:48.439 00:21:48.439 Latency(us) 00:21:48.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.439 =================================================================================================================== 00:21:48.439 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 508111 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:48.439 rmmod nvme_tcp 00:21:48.439 rmmod nvme_fabrics 00:21:48.439 rmmod nvme_keyring 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 508080 ']' 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 508080 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 508080 ']' 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 508080 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:48.439 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 508080 00:21:48.700 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:21:48.700 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:21:48.700 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 508080' 00:21:48.700 killing process with pid 508080 00:21:48.700 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 508080 00:21:48.700 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 508080 00:21:48.700 23:58:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:48.700 23:58:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:48.700 23:58:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:48.700 
23:58:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:48.700 23:58:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:48.700 23:58:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.700 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:48.700 23:58:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.247 23:58:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:51.247 23:58:05 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.omXxutgYnr /tmp/tmp.iEavPzbqjC /tmp/tmp.dYc23xM5FI 00:21:51.247 00:21:51.247 real 1m24.248s 00:21:51.247 user 2m8.898s 00:21:51.247 sys 0m26.944s 00:21:51.247 23:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1118 -- # xtrace_disable 00:21:51.247 23:58:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:51.247 ************************************ 00:21:51.247 END TEST nvmf_tls 00:21:51.247 ************************************ 00:21:51.247 23:58:05 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:21:51.247 23:58:05 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:51.247 23:58:05 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:21:51.247 23:58:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:21:51.247 23:58:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:51.247 ************************************ 00:21:51.247 START TEST nvmf_fips 00:21:51.247 ************************************ 00:21:51.247 23:58:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:51.247 * Looking for test storage... 
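The tail of the TLS test above is the common teardown: stop the remaining SPDK process, unload the kernel NVMe/TCP initiator modules, drop the test network namespace and interface addresses, and delete the temporary PSK files, after which run_test moves on to nvmf_fips. Roughly, and with the names and pids taken from this run (the namespace handling is simplified here):

    kill "$nvmfpid" && wait "$nvmfpid"      # nvmf_tgt, pid 508080 in this log
    modprobe -v -r nvme-tcp                 # rmmod lines for nvme_tcp/nvme_fabrics/nvme_keyring above
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # stand-in for remove_spdk_ns
    ip -4 addr flush cvl_0_1
    rm -f /tmp/tmp.omXxutgYnr /tmp/tmp.iEavPzbqjC /tmp/tmp.dYc23xM5FI   # generated PSK key files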
00:21:51.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.247 23:58:06 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:51.247 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # local es=0 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@644 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@630 -- # local arg=openssl 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@634 -- # type -t openssl 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # type -P openssl 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # arg=/usr/bin/openssl 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # [[ -x /usr/bin/openssl ]] 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@645 -- # openssl md5 /dev/fd/62 00:21:51.248 Error setting digest 00:21:51.248 00A2E7B1797F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:51.248 00A2E7B1797F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@645 -- # es=1 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:51.248 23:58:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:59.393 
23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:59.393 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:59.393 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:59.393 Found net devices under 0000:31:00.0: cvl_0_0 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:59.393 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:59.394 Found net devices under 0000:31:00.1: cvl_0_1 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:59.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:59.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:21:59.394 00:21:59.394 --- 10.0.0.2 ping statistics --- 00:21:59.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.394 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:59.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:59.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.439 ms 00:21:59.394 00:21:59.394 --- 10.0.0.1 ping statistics --- 00:21:59.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.394 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=513489 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 513489 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@823 -- # '[' -z 513489 ']' 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:59.394 23:58:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:59.656 [2024-07-15 23:58:14.605511] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:21:59.656 [2024-07-15 23:58:14.605588] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.656 [2024-07-15 23:58:14.702161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.656 [2024-07-15 23:58:14.793826] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.656 [2024-07-15 23:58:14.793885] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
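At this point the test topology is fully wired: nvmf_tcp_init has pushed the first E810 port into the cvl_0_0_ns_spdk namespace as 10.0.0.2, left its sibling in the root namespace as 10.0.0.1, opened TCP/4420, verified reachability in both directions, and nvmfappstart has launched the target inside that namespace. Condensed into plain shell (commands lifted from the trace above; the jenkins workspace prefix is shortened, so treat this as an illustrative sketch rather than the literal nvmf/common.sh code):

    # build the two-port, namespace-separated topology used by the TCP tests
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                             # root namespace -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> root namespace
    modprobe nvme-tcp
    # the target then runs inside the namespace, pinned to core 1 (-m 0x2)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &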
00:21:59.656 [2024-07-15 23:58:14.793893] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.656 [2024-07-15 23:58:14.793900] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.656 [2024-07-15 23:58:14.793907] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:59.656 [2024-07-15 23:58:14.793931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.362 23:58:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:22:00.362 23:58:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # return 0 00:22:00.362 23:58:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:00.362 23:58:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:00.362 23:58:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:00.362 23:58:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.362 23:58:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:00.362 23:58:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:00.362 23:58:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:00.362 23:58:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:00.362 23:58:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:00.362 23:58:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:00.362 23:58:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:00.363 23:58:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:00.623 [2024-07-15 23:58:15.565530] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.623 [2024-07-15 23:58:15.581530] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:00.623 [2024-07-15 23:58:15.581772] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.623 [2024-07-15 23:58:15.611688] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:00.623 malloc0 00:22:00.623 23:58:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:00.623 23:58:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=513566 00:22:00.623 23:58:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 513566 /var/tmp/bdevperf.sock 00:22:00.623 23:58:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:00.623 23:58:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@823 -- # '[' -z 513566 ']' 00:22:00.623 23:58:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:00.623 23:58:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- # 
local max_retries=100 00:22:00.623 23:58:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:00.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:00.623 23:58:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # xtrace_disable 00:22:00.623 23:58:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:00.623 [2024-07-15 23:58:15.704717] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:22:00.623 [2024-07-15 23:58:15.704790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid513566 ] 00:22:00.624 [2024-07-15 23:58:15.766220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.884 [2024-07-15 23:58:15.830245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.458 23:58:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:22:01.458 23:58:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # return 0 00:22:01.458 23:58:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:01.458 [2024-07-15 23:58:16.598086] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:01.458 [2024-07-15 23:58:16.598148] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:01.720 TLSTESTn1 00:22:01.720 23:58:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:01.720 Running I/O for 10 seconds... 
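The TLS leg of the FIPS test is driven entirely from the initiator side of the trace above: the PSK is staged on disk with owner-only permissions, bdevperf starts idle (-z) with its own RPC socket, the attach RPC hands the key over, and perform_tests kicks off the 10-second verify workload against the resulting TLSTESTn1 bdev. The target-side subsystem setup happens inside setup_nvmf_tgt_conf and is not expanded in the trace, so the sketch below (paths shortened) only replays the commands that are actually logged:

    # TLS PSK in NVMe interchange format, written with 0600 permissions as the TLS code requires
    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    echo -n "$key" > test/nvmf/fips/key.txt
    chmod 0600 test/nvmf/fips/key.txt

    # bdevperf waits for configuration over /var/tmp/bdevperf.sock before running I/O
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

    # attach to the TLS listener on the target, passing the PSK file
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/fips/key.txt

    # drive the queued verify workload for the configured 10 seconds
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests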
00:22:11.718 00:22:11.718 Latency(us) 00:22:11.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.718 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:11.718 Verification LBA range: start 0x0 length 0x2000 00:22:11.718 TLSTESTn1 : 10.03 4898.46 19.13 0.00 0.00 26078.42 4696.75 83886.08 00:22:11.718 =================================================================================================================== 00:22:11.718 Total : 4898.46 19.13 0.00 0.00 26078.42 4696.75 83886.08 00:22:11.718 0 00:22:11.718 23:58:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:11.718 23:58:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:11.718 23:58:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@800 -- # type=--id 00:22:11.718 23:58:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@801 -- # id=0 00:22:11.718 23:58:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@802 -- # '[' --id = --pid ']' 00:22:11.718 23:58:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:11.718 23:58:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # shm_files=nvmf_trace.0 00:22:11.718 23:58:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # [[ -z nvmf_trace.0 ]] 00:22:11.718 23:58:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # for n in $shm_files 00:22:11.718 23:58:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:11.718 nvmf_trace.0 00:22:11.979 23:58:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@815 -- # return 0 00:22:11.979 23:58:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 513566 00:22:11.979 23:58:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@942 -- # '[' -z 513566 ']' 00:22:11.979 23:58:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # kill -0 513566 00:22:11.979 23:58:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # uname 00:22:11.979 23:58:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:22:11.979 23:58:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 513566 00:22:11.979 23:58:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:22:11.979 23:58:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:22:11.979 23:58:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@960 -- # echo 'killing process with pid 513566' 00:22:11.979 killing process with pid 513566 00:22:11.979 23:58:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@961 -- # kill 513566 00:22:11.979 Received shutdown signal, test time was about 10.000000 seconds 00:22:11.979 00:22:11.979 Latency(us) 00:22:11.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.979 =================================================================================================================== 00:22:11.979 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:11.979 [2024-07-15 23:58:27.005440] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:11.979 23:58:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # wait 513566 00:22:11.979 23:58:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:11.979 23:58:27 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:22:11.979 23:58:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:11.980 23:58:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:11.980 23:58:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:11.980 23:58:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:11.980 23:58:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:11.980 rmmod nvme_tcp 00:22:11.980 rmmod nvme_fabrics 00:22:11.980 rmmod nvme_keyring 00:22:12.241 23:58:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:12.241 23:58:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:12.241 23:58:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:12.241 23:58:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 513489 ']' 00:22:12.241 23:58:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 513489 00:22:12.241 23:58:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@942 -- # '[' -z 513489 ']' 00:22:12.241 23:58:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # kill -0 513489 00:22:12.241 23:58:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # uname 00:22:12.241 23:58:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:22:12.241 23:58:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 513489 00:22:12.241 23:58:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:22:12.241 23:58:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:22:12.241 23:58:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@960 -- # echo 'killing process with pid 513489' 00:22:12.241 killing process with pid 513489 00:22:12.241 23:58:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@961 -- # kill 513489 00:22:12.241 [2024-07-15 23:58:27.244685] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:12.241 23:58:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # wait 513489 00:22:12.241 23:58:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:12.241 23:58:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:12.241 23:58:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:12.241 23:58:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:12.241 23:58:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:12.241 23:58:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.241 23:58:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:12.241 23:58:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.792 23:58:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:14.792 23:58:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:14.792 00:22:14.792 real 0m23.505s 00:22:14.792 user 0m23.683s 00:22:14.792 sys 0m10.477s 00:22:14.792 23:58:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1118 -- # xtrace_disable 00:22:14.792 23:58:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:14.792 ************************************ 00:22:14.792 END TEST nvmf_fips 00:22:14.792 
************************************ 00:22:14.792 23:58:29 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:22:14.792 23:58:29 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:22:14.792 23:58:29 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:22:14.792 23:58:29 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:22:14.792 23:58:29 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:22:14.792 23:58:29 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:22:14.792 23:58:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:22.935 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:22.935 23:58:37 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:22.935 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:22.935 Found net devices under 0000:31:00.0: cvl_0_0 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:22.935 Found net devices under 0000:31:00.1: cvl_0_1 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:22:22.935 23:58:37 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:22.935 23:58:37 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:22:22.935 23:58:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 
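nvmf.sh only reaches perf_adq.sh because this NIC scan succeeded: gather_supported_nvmf_pci_devs maps a table of Intel and Mellanox device IDs to PCI addresses and then resolves each address to its kernel interface through sysfs, and the ADQ suite is dispatched once usable ports land in net_devs. A rough reconstruction of the E810/tcp path shown in the trace (pci_bus_cache is assumed to be populated earlier by the common scripts, and the link-state filtering is simplified here):

    intel=0x8086 mellanox=0x15b3
    e810=() net_devs=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})   # Intel E810 device IDs; this rig reports
    e810+=(${pci_bus_cache["$intel:0x159b"]})   # 0x159b twice (0000:31:00.0 and 0000:31:00.1)
    pci_devs=("${e810[@]}")                     # on an e810 + tcp setup only these are kept

    for pci in "${pci_devs[@]}"; do
        echo "Found $pci"
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")   # the real helper also checks the port's operstate is 'up'
    done

With both cvl_0_0 and cvl_0_1 collected, TCP_INTERFACE_LIST is non-empty and run_test hands control to perf_adq.sh with --transport=tcp, which is what the START TEST banner below marks.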
00:22:22.935 23:58:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:22.935 ************************************ 00:22:22.935 START TEST nvmf_perf_adq 00:22:22.935 ************************************ 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:22.935 * Looking for test storage... 00:22:22.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:22.935 23:58:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:31.113 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:31.113 Found 0000:31:00.1 (0x8086 - 0x159b) 
00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:31.113 Found net devices under 0000:31:00.0: cvl_0_0 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:31.113 Found net devices under 0000:31:00.1: cvl_0_1 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:31.113 23:58:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:31.686 23:58:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:33.597 23:58:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:38.884 23:58:53 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:38.884 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:38.884 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:38.884 Found net devices under 0000:31:00.0: cvl_0_0 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:38.884 Found net devices under 0000:31:00.1: cvl_0_1 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:38.884 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.885 23:58:53 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:38.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.765 ms 00:22:38.885 00:22:38.885 --- 10.0.0.2 ping statistics --- 00:22:38.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.885 rtt min/avg/max/mdev = 0.765/0.765/0.765/0.000 ms 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:38.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:22:38.885 00:22:38.885 --- 10.0.0.1 ping statistics --- 00:22:38.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.885 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=526443 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 526443 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@823 -- # '[' -z 526443 ']' 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@828 -- # local max_retries=100 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # xtrace_disable 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.885 23:58:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:38.885 [2024-07-15 23:58:53.884950] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
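The nvmf_tcp_init block traced above splits the two E810 ports into a target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (the NVMe/TCP target side), cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator, TCP port 4420 is opened in iptables, and connectivity is verified with a ping in each direction before nvmf_tgt is launched inside the namespace. Pulled out of the trace, the manual equivalent (using the interface and namespace names from this run) is roughly:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator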
00:22:38.885 [2024-07-15 23:58:53.885016] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.885 [2024-07-15 23:58:53.965310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:38.885 [2024-07-15 23:58:54.041169] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.885 [2024-07-15 23:58:54.041210] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.885 [2024-07-15 23:58:54.041218] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.885 [2024-07-15 23:58:54.041224] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.885 [2024-07-15 23:58:54.041239] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:38.885 [2024-07-15 23:58:54.041374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.885 [2024-07-15 23:58:54.041576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.885 [2024-07-15 23:58:54.041706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:38.885 [2024-07-15 23:58:54.041710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # return 0 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.827 23:58:54 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.827 [2024-07-15 23:58:54.844267] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.827 Malloc1 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.827 [2024-07-15 23:58:54.901014] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=526797 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:39.827 23:58:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:41.739 23:58:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:41.739 23:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:41.739 23:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.999 23:58:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:41.999 23:58:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:41.999 "tick_rate": 2400000000, 00:22:41.999 "poll_groups": [ 00:22:41.999 { 00:22:41.999 "name": "nvmf_tgt_poll_group_000", 
00:22:41.999 "admin_qpairs": 1, 00:22:41.999 "io_qpairs": 1, 00:22:41.999 "current_admin_qpairs": 1, 00:22:41.999 "current_io_qpairs": 1, 00:22:41.999 "pending_bdev_io": 0, 00:22:41.999 "completed_nvme_io": 20359, 00:22:41.999 "transports": [ 00:22:41.999 { 00:22:41.999 "trtype": "TCP" 00:22:41.999 } 00:22:41.999 ] 00:22:41.999 }, 00:22:41.999 { 00:22:41.999 "name": "nvmf_tgt_poll_group_001", 00:22:41.999 "admin_qpairs": 0, 00:22:41.999 "io_qpairs": 1, 00:22:41.999 "current_admin_qpairs": 0, 00:22:41.999 "current_io_qpairs": 1, 00:22:41.999 "pending_bdev_io": 0, 00:22:41.999 "completed_nvme_io": 29400, 00:22:41.999 "transports": [ 00:22:41.999 { 00:22:41.999 "trtype": "TCP" 00:22:41.999 } 00:22:41.999 ] 00:22:41.999 }, 00:22:41.999 { 00:22:41.999 "name": "nvmf_tgt_poll_group_002", 00:22:41.999 "admin_qpairs": 0, 00:22:41.999 "io_qpairs": 1, 00:22:41.999 "current_admin_qpairs": 0, 00:22:41.999 "current_io_qpairs": 1, 00:22:41.999 "pending_bdev_io": 0, 00:22:41.999 "completed_nvme_io": 21619, 00:22:41.999 "transports": [ 00:22:41.999 { 00:22:41.999 "trtype": "TCP" 00:22:41.999 } 00:22:41.999 ] 00:22:41.999 }, 00:22:41.999 { 00:22:41.999 "name": "nvmf_tgt_poll_group_003", 00:22:41.999 "admin_qpairs": 0, 00:22:41.999 "io_qpairs": 1, 00:22:41.999 "current_admin_qpairs": 0, 00:22:41.999 "current_io_qpairs": 1, 00:22:41.999 "pending_bdev_io": 0, 00:22:41.999 "completed_nvme_io": 20703, 00:22:41.999 "transports": [ 00:22:41.999 { 00:22:41.999 "trtype": "TCP" 00:22:41.999 } 00:22:41.999 ] 00:22:41.999 } 00:22:41.999 ] 00:22:41.999 }' 00:22:41.999 23:58:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:41.999 23:58:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:41.999 23:58:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:41.999 23:58:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:41.999 23:58:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 526797 00:22:50.131 Initializing NVMe Controllers 00:22:50.131 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:50.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:50.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:50.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:50.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:50.131 Initialization complete. Launching workers. 
00:22:50.131 ======================================================== 00:22:50.131 Latency(us) 00:22:50.131 Device Information : IOPS MiB/s Average min max 00:22:50.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 14573.50 56.93 4391.97 1220.04 8618.80 00:22:50.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15410.00 60.20 4152.91 1151.74 9131.65 00:22:50.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13396.70 52.33 4777.34 1368.53 10918.50 00:22:50.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11355.70 44.36 5635.91 1400.92 10994.08 00:22:50.131 ======================================================== 00:22:50.131 Total : 54735.90 213.81 4677.06 1151.74 10994.08 00:22:50.131 00:22:50.131 23:59:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:50.131 23:59:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:50.131 23:59:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:50.131 23:59:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:50.131 23:59:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:50.131 23:59:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:50.131 23:59:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:50.131 rmmod nvme_tcp 00:22:50.131 rmmod nvme_fabrics 00:22:50.131 rmmod nvme_keyring 00:22:50.131 23:59:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:50.131 23:59:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:50.131 23:59:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:50.131 23:59:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 526443 ']' 00:22:50.131 23:59:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 526443 00:22:50.131 23:59:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@942 -- # '[' -z 526443 ']' 00:22:50.131 23:59:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # kill -0 526443 00:22:50.131 23:59:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # uname 00:22:50.131 23:59:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:22:50.131 23:59:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 526443 00:22:50.131 23:59:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:22:50.131 23:59:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:22:50.131 23:59:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@960 -- # echo 'killing process with pid 526443' 00:22:50.131 killing process with pid 526443 00:22:50.131 23:59:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@961 -- # kill 526443 00:22:50.131 23:59:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # wait 526443 00:22:50.392 23:59:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:50.392 23:59:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:50.392 23:59:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:50.392 23:59:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:50.392 23:59:05 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:50.392 23:59:05 nvmf_tcp.nvmf_perf_adq -- 
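This first pass is the baseline run: adq_configure_nvmf_target 0 leaves the posix socket implementation at placement-id 0, creates the TCP transport with --sock-priority 0 and 8 KiB I/O units, and exports a 64 MiB Malloc namespace as nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420. spdk_nvme_perf then drives 4 KiB random reads at queue depth 64 for 10 seconds from cores 4-7 (-c 0xF0), and nvmf_get_stats confirms that each of the four target poll groups picked up exactly one I/O qpair (count=4), for roughly 54.7K IOPS in total. Assuming rpc_cmd in this harness is a thin wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock, the same target could be configured by hand along these lines:

  scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
  scripts/rpc.py framework_start_init
  scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1               # 64 MiB, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420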
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.392 23:59:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:50.392 23:59:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.306 23:59:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:52.306 23:59:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:52.306 23:59:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:54.218 23:59:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:55.603 23:59:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.886 23:59:15 
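Between the two passes, adq_reload_driver tears the namespace down, flushes the initiator address, and reloads the ice driver, presumably so the ADQ pass starts from a clean channel/TC configuration on the E810 port; the five-second sleep gives the ports time to re-register before nvmftestinit re-enumerates them below. In script form this step is simply:

  rmmod ice
  modprobe ice
  sleep 5    # let the ports come back before re-running nvmftestinit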
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.886 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:00.886 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:00.887 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:00.887 Found net devices under 0000:31:00.0: cvl_0_0 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:00.887 Found net devices under 0000:31:00.1: cvl_0_1 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.887 
23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:00.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:23:00.887 00:23:00.887 --- 10.0.0.2 ping statistics --- 00:23:00.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.887 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:23:00.887 23:59:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:00.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:23:00.887 00:23:00.887 --- 10.0.0.1 ping statistics --- 00:23:00.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.887 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:23:00.887 23:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.887 23:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:00.887 23:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:00.887 23:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.887 23:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:00.887 23:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:00.887 23:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.887 23:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:00.887 23:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:00.887 23:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:23:00.887 23:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:00.887 23:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:00.887 23:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:00.887 net.core.busy_poll = 1 00:23:00.887 23:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:00.887 net.core.busy_read = 1 00:23:00.887 23:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:00.887 23:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:01.147 23:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:23:01.147 23:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:01.147 23:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:01.147 23:59:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:01.147 23:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:01.147 23:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:01.147 23:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:01.147 23:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=531262 00:23:01.147 23:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 531262 00:23:01.147 23:59:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:01.147 23:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@823 -- # '[' -z 531262 ']' 00:23:01.147 23:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.147 23:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@828 -- # local max_retries=100 00:23:01.147 23:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.147 23:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # xtrace_disable 00:23:01.147 23:59:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:01.408 [2024-07-15 23:59:16.370673] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:23:01.408 [2024-07-15 23:59:16.370731] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.408 [2024-07-15 23:59:16.449709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:01.408 [2024-07-15 23:59:16.521189] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.408 [2024-07-15 23:59:16.521235] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.408 [2024-07-15 23:59:16.521243] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.408 [2024-07-15 23:59:16.521250] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.408 [2024-07-15 23:59:16.521255] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
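The second pass enables ADQ on the target port before restarting nvmf_tgt: hardware TC offload is turned on, the channel-pkt-inspect-optimize private flag is turned off, busy polling is enabled system-wide, an mqprio qdisc carves the port into two traffic classes (queues 0-1 for TC0, queues 2-3 for TC1), and a flower filter steers TCP traffic to 10.0.0.2:4420 into TC1 in hardware; the set_xps_rxqs helper then pins transmit/receive queue affinity for those queues. Collected from the trace above, the configuration is:

  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
  ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 \
      flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1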
00:23:01.408 [2024-07-15 23:59:16.521397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.408 [2024-07-15 23:59:16.521511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.408 [2024-07-15 23:59:16.521666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.408 [2024-07-15 23:59:16.521668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:01.983 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:23:01.983 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # return 0 00:23:01.983 23:59:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:01.983 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:01.983 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.244 [2024-07-15 23:59:17.311589] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.244 Malloc1 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:02.244 23:59:17 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:02.244 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:02.245 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.245 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:02.245 23:59:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:02.245 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:02.245 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.245 [2024-07-15 23:59:17.371007] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.245 23:59:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:02.245 23:59:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=531556 00:23:02.245 23:59:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:23:02.245 23:59:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:04.806 23:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:23:04.806 23:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:04.806 23:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:04.806 23:59:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:04.806 23:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:23:04.806 "tick_rate": 2400000000, 00:23:04.806 "poll_groups": [ 00:23:04.806 { 00:23:04.806 "name": "nvmf_tgt_poll_group_000", 00:23:04.806 "admin_qpairs": 1, 00:23:04.806 "io_qpairs": 2, 00:23:04.806 "current_admin_qpairs": 1, 00:23:04.806 "current_io_qpairs": 2, 00:23:04.806 "pending_bdev_io": 0, 00:23:04.806 "completed_nvme_io": 29166, 00:23:04.806 "transports": [ 00:23:04.806 { 00:23:04.806 "trtype": "TCP" 00:23:04.806 } 00:23:04.806 ] 00:23:04.806 }, 00:23:04.806 { 00:23:04.806 "name": "nvmf_tgt_poll_group_001", 00:23:04.806 "admin_qpairs": 0, 00:23:04.806 "io_qpairs": 2, 00:23:04.806 "current_admin_qpairs": 0, 00:23:04.806 "current_io_qpairs": 2, 00:23:04.806 "pending_bdev_io": 0, 00:23:04.806 "completed_nvme_io": 42280, 00:23:04.806 "transports": [ 00:23:04.806 { 00:23:04.806 "trtype": "TCP" 00:23:04.806 } 00:23:04.806 ] 00:23:04.806 }, 00:23:04.806 { 00:23:04.806 "name": "nvmf_tgt_poll_group_002", 00:23:04.806 "admin_qpairs": 0, 00:23:04.806 "io_qpairs": 0, 00:23:04.806 "current_admin_qpairs": 0, 00:23:04.806 "current_io_qpairs": 0, 00:23:04.806 "pending_bdev_io": 0, 00:23:04.806 "completed_nvme_io": 0, 00:23:04.806 "transports": [ 00:23:04.806 { 00:23:04.806 "trtype": "TCP" 
00:23:04.806 } 00:23:04.806 ] 00:23:04.806 }, 00:23:04.806 { 00:23:04.806 "name": "nvmf_tgt_poll_group_003", 00:23:04.806 "admin_qpairs": 0, 00:23:04.806 "io_qpairs": 0, 00:23:04.806 "current_admin_qpairs": 0, 00:23:04.806 "current_io_qpairs": 0, 00:23:04.806 "pending_bdev_io": 0, 00:23:04.806 "completed_nvme_io": 0, 00:23:04.806 "transports": [ 00:23:04.806 { 00:23:04.806 "trtype": "TCP" 00:23:04.806 } 00:23:04.806 ] 00:23:04.806 } 00:23:04.806 ] 00:23:04.806 }' 00:23:04.806 23:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:04.806 23:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:23:04.806 23:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:23:04.806 23:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:23:04.806 23:59:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 531556 00:23:12.968 Initializing NVMe Controllers 00:23:12.968 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:12.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:12.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:12.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:12.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:12.968 Initialization complete. Launching workers. 00:23:12.968 ======================================================== 00:23:12.968 Latency(us) 00:23:12.968 Device Information : IOPS MiB/s Average min max 00:23:12.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11770.80 45.98 5437.09 1366.29 49785.76 00:23:12.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9061.70 35.40 7063.59 1283.20 53384.25 00:23:12.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9696.00 37.87 6614.46 1391.07 53593.04 00:23:12.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9538.20 37.26 6728.90 1114.51 49540.13 00:23:12.968 ======================================================== 00:23:12.968 Total : 40066.70 156.51 6397.39 1114.51 53593.04 00:23:12.968 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:12.968 rmmod nvme_tcp 00:23:12.968 rmmod nvme_fabrics 00:23:12.968 rmmod nvme_keyring 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 531262 ']' 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 531262 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- 
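With placement-id 1 and --sock-priority 1 in effect, nvmf_get_stats now shows the four I/O qpairs from the same 4-core perf run concentrated on only two poll groups (nvmf_tgt_poll_group_000 and _001, two qpairs each) while the other two sit idle, which is exactly what the test asserts: it counts poll groups with current_io_qpairs == 0 and requires at least 2, matching the two hardware queues reserved for TC1. Assuming the same rpc.py wrapper as above, the check boils down to:

  scripts/rpc.py nvmf_get_stats \
    | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
    | wc -l        # expect >= 2 idle poll groups while ADQ steering is active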
common/autotest_common.sh@942 -- # '[' -z 531262 ']' 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # kill -0 531262 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # uname 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 531262 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@960 -- # echo 'killing process with pid 531262' 00:23:12.968 killing process with pid 531262 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@961 -- # kill 531262 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # wait 531262 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:12.968 23:59:27 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.385 23:59:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:16.385 23:59:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:16.385 00:23:16.385 real 0m53.413s 00:23:16.385 user 2m49.749s 00:23:16.385 sys 0m10.868s 00:23:16.385 23:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1118 -- # xtrace_disable 00:23:16.385 23:59:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.385 ************************************ 00:23:16.385 END TEST nvmf_perf_adq 00:23:16.385 ************************************ 00:23:16.385 23:59:30 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:23:16.385 23:59:30 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:16.385 23:59:30 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:23:16.385 23:59:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:23:16.385 23:59:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:16.385 ************************************ 00:23:16.385 START TEST nvmf_shutdown 00:23:16.385 ************************************ 00:23:16.385 23:59:30 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:16.385 * Looking for test storage... 
00:23:16.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # xtrace_disable 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:16.385 ************************************ 00:23:16.385 START TEST nvmf_shutdown_tc1 00:23:16.385 ************************************ 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1117 -- # nvmf_shutdown_tc1 00:23:16.385 23:59:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.385 23:59:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:16.386 23:59:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:16.386 23:59:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:16.386 23:59:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.386 23:59:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.386 23:59:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.386 23:59:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:16.386 23:59:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:16.386 23:59:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:16.386 23:59:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:24.528 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:24.528 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:24.528 23:59:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:24.528 Found net devices under 0000:31:00.0: cvl_0_0 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:24.528 Found net devices under 0000:31:00.1: cvl_0_1 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:24.528 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:24.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:23:24.529 00:23:24.529 --- 10.0.0.2 ping statistics --- 00:23:24.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.529 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:24.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:24.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:23:24.529 00:23:24.529 --- 10.0.0.1 ping statistics --- 00:23:24.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.529 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=538429 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 538429 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@823 -- # '[' -z 538429 ']' 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@828 -- # local max_retries=100 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # xtrace_disable 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:24.529 23:59:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:24.529 [2024-07-15 23:59:39.442212] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
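The nvmf_tcp_init sequence traced above builds the TCP test topology for this case: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2/24 (target side), the second port (cvl_0_1) stays in the root namespace as 10.0.0.1/24 (initiator side), an iptables rule admits TCP traffic to port 4420, and one ping in each direction confirms reachability before nvmf_tgt is started inside the namespace via ip netns exec. A minimal standalone sketch of the same setup, using the interface names and addresses from this log (other hosts will report different device names):

# target-side port gets its own namespace; initiator-side port stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# admit NVMe/TCP traffic on the port the tests listen on
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity-check both directions before launching nvmf_tgt in the namespace
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1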
00:23:24.529 [2024-07-15 23:59:39.442282] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.529 [2024-07-15 23:59:39.538529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:24.529 [2024-07-15 23:59:39.632974] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.529 [2024-07-15 23:59:39.633036] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.529 [2024-07-15 23:59:39.633044] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.529 [2024-07-15 23:59:39.633051] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.529 [2024-07-15 23:59:39.633057] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.529 [2024-07-15 23:59:39.633191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.529 [2024-07-15 23:59:39.633356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:24.529 [2024-07-15 23:59:39.633525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.529 [2024-07-15 23:59:39.633526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:25.101 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:23:25.101 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # return 0 00:23:25.101 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:25.101 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:25.101 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.101 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.101 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:25.101 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.101 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.101 [2024-07-15 23:59:40.272787] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.101 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.101 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:25.101 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:25.101 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:25.101 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.101 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:25.362 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:23:25.362 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:25.362 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:25.362 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:25.362 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:25.362 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:25.362 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:25.362 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:25.362 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:25.362 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:25.362 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:25.362 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:25.362 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:25.362 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:25.362 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:25.362 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:25.362 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:25.362 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:25.362 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:25.362 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:25.362 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:25.362 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.362 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.362 Malloc1 00:23:25.362 [2024-07-15 23:59:40.376220] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.362 Malloc2 00:23:25.362 Malloc3 00:23:25.362 Malloc4 00:23:25.362 Malloc5 00:23:25.362 Malloc6 00:23:25.623 Malloc7 00:23:25.623 Malloc8 00:23:25.623 Malloc9 00:23:25.623 Malloc10 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=538807 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 538807 /var/tmp/bdevperf.sock 00:23:25.623 23:59:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@823 -- # '[' -z 538807 ']' 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@828 -- # local max_retries=100 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:25.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # xtrace_disable 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.623 { 00:23:25.623 "params": { 00:23:25.623 "name": "Nvme$subsystem", 00:23:25.623 "trtype": "$TEST_TRANSPORT", 00:23:25.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.623 "adrfam": "ipv4", 00:23:25.623 "trsvcid": "$NVMF_PORT", 00:23:25.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.623 "hdgst": ${hdgst:-false}, 00:23:25.623 "ddgst": ${ddgst:-false} 00:23:25.623 }, 00:23:25.623 "method": "bdev_nvme_attach_controller" 00:23:25.623 } 00:23:25.623 EOF 00:23:25.623 )") 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.623 { 00:23:25.623 "params": { 00:23:25.623 "name": "Nvme$subsystem", 00:23:25.623 "trtype": "$TEST_TRANSPORT", 00:23:25.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.623 "adrfam": "ipv4", 00:23:25.623 "trsvcid": "$NVMF_PORT", 00:23:25.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.623 "hdgst": ${hdgst:-false}, 00:23:25.623 "ddgst": ${ddgst:-false} 00:23:25.623 }, 00:23:25.623 "method": "bdev_nvme_attach_controller" 00:23:25.623 } 00:23:25.623 EOF 00:23:25.623 )") 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.623 { 00:23:25.623 "params": { 00:23:25.623 "name": "Nvme$subsystem", 00:23:25.623 "trtype": 
"$TEST_TRANSPORT", 00:23:25.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.623 "adrfam": "ipv4", 00:23:25.623 "trsvcid": "$NVMF_PORT", 00:23:25.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.623 "hdgst": ${hdgst:-false}, 00:23:25.623 "ddgst": ${ddgst:-false} 00:23:25.623 }, 00:23:25.623 "method": "bdev_nvme_attach_controller" 00:23:25.623 } 00:23:25.623 EOF 00:23:25.623 )") 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.623 { 00:23:25.623 "params": { 00:23:25.623 "name": "Nvme$subsystem", 00:23:25.623 "trtype": "$TEST_TRANSPORT", 00:23:25.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.623 "adrfam": "ipv4", 00:23:25.623 "trsvcid": "$NVMF_PORT", 00:23:25.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.623 "hdgst": ${hdgst:-false}, 00:23:25.623 "ddgst": ${ddgst:-false} 00:23:25.623 }, 00:23:25.623 "method": "bdev_nvme_attach_controller" 00:23:25.623 } 00:23:25.623 EOF 00:23:25.623 )") 00:23:25.623 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.884 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.884 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.884 { 00:23:25.884 "params": { 00:23:25.884 "name": "Nvme$subsystem", 00:23:25.884 "trtype": "$TEST_TRANSPORT", 00:23:25.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.884 "adrfam": "ipv4", 00:23:25.884 "trsvcid": "$NVMF_PORT", 00:23:25.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.884 "hdgst": ${hdgst:-false}, 00:23:25.884 "ddgst": ${ddgst:-false} 00:23:25.884 }, 00:23:25.884 "method": "bdev_nvme_attach_controller" 00:23:25.884 } 00:23:25.884 EOF 00:23:25.884 )") 00:23:25.884 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.885 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.885 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.885 { 00:23:25.885 "params": { 00:23:25.885 "name": "Nvme$subsystem", 00:23:25.885 "trtype": "$TEST_TRANSPORT", 00:23:25.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.885 "adrfam": "ipv4", 00:23:25.885 "trsvcid": "$NVMF_PORT", 00:23:25.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.885 "hdgst": ${hdgst:-false}, 00:23:25.885 "ddgst": ${ddgst:-false} 00:23:25.885 }, 00:23:25.885 "method": "bdev_nvme_attach_controller" 00:23:25.885 } 00:23:25.885 EOF 00:23:25.885 )") 00:23:25.885 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.885 [2024-07-15 23:59:40.825164] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:23:25.885 [2024-07-15 23:59:40.825219] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:25.885 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.885 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.885 { 00:23:25.885 "params": { 00:23:25.885 "name": "Nvme$subsystem", 00:23:25.885 "trtype": "$TEST_TRANSPORT", 00:23:25.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.885 "adrfam": "ipv4", 00:23:25.885 "trsvcid": "$NVMF_PORT", 00:23:25.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.885 "hdgst": ${hdgst:-false}, 00:23:25.885 "ddgst": ${ddgst:-false} 00:23:25.885 }, 00:23:25.885 "method": "bdev_nvme_attach_controller" 00:23:25.885 } 00:23:25.885 EOF 00:23:25.885 )") 00:23:25.885 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.885 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.885 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.885 { 00:23:25.885 "params": { 00:23:25.885 "name": "Nvme$subsystem", 00:23:25.885 "trtype": "$TEST_TRANSPORT", 00:23:25.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.885 "adrfam": "ipv4", 00:23:25.885 "trsvcid": "$NVMF_PORT", 00:23:25.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.885 "hdgst": ${hdgst:-false}, 00:23:25.885 "ddgst": ${ddgst:-false} 00:23:25.885 }, 00:23:25.885 "method": "bdev_nvme_attach_controller" 00:23:25.885 } 00:23:25.885 EOF 00:23:25.885 )") 00:23:25.885 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.885 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.885 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.885 { 00:23:25.885 "params": { 00:23:25.885 "name": "Nvme$subsystem", 00:23:25.885 "trtype": "$TEST_TRANSPORT", 00:23:25.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.885 "adrfam": "ipv4", 00:23:25.885 "trsvcid": "$NVMF_PORT", 00:23:25.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.885 "hdgst": ${hdgst:-false}, 00:23:25.885 "ddgst": ${ddgst:-false} 00:23:25.885 }, 00:23:25.885 "method": "bdev_nvme_attach_controller" 00:23:25.885 } 00:23:25.885 EOF 00:23:25.885 )") 00:23:25.885 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.885 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:25.885 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:25.885 { 00:23:25.885 "params": { 00:23:25.885 "name": "Nvme$subsystem", 00:23:25.885 "trtype": "$TEST_TRANSPORT", 00:23:25.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:25.885 "adrfam": "ipv4", 00:23:25.885 "trsvcid": "$NVMF_PORT", 00:23:25.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:25.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:25.885 "hdgst": ${hdgst:-false}, 
00:23:25.885 "ddgst": ${ddgst:-false} 00:23:25.885 }, 00:23:25.885 "method": "bdev_nvme_attach_controller" 00:23:25.885 } 00:23:25.885 EOF 00:23:25.885 )") 00:23:25.885 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:25.885 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:25.885 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:25.885 23:59:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:25.885 "params": { 00:23:25.885 "name": "Nvme1", 00:23:25.885 "trtype": "tcp", 00:23:25.885 "traddr": "10.0.0.2", 00:23:25.885 "adrfam": "ipv4", 00:23:25.885 "trsvcid": "4420", 00:23:25.885 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.885 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:25.885 "hdgst": false, 00:23:25.885 "ddgst": false 00:23:25.885 }, 00:23:25.885 "method": "bdev_nvme_attach_controller" 00:23:25.885 },{ 00:23:25.885 "params": { 00:23:25.885 "name": "Nvme2", 00:23:25.885 "trtype": "tcp", 00:23:25.885 "traddr": "10.0.0.2", 00:23:25.885 "adrfam": "ipv4", 00:23:25.885 "trsvcid": "4420", 00:23:25.885 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:25.885 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:25.885 "hdgst": false, 00:23:25.885 "ddgst": false 00:23:25.885 }, 00:23:25.885 "method": "bdev_nvme_attach_controller" 00:23:25.885 },{ 00:23:25.885 "params": { 00:23:25.885 "name": "Nvme3", 00:23:25.885 "trtype": "tcp", 00:23:25.885 "traddr": "10.0.0.2", 00:23:25.885 "adrfam": "ipv4", 00:23:25.885 "trsvcid": "4420", 00:23:25.885 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:25.885 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:25.885 "hdgst": false, 00:23:25.885 "ddgst": false 00:23:25.885 }, 00:23:25.885 "method": "bdev_nvme_attach_controller" 00:23:25.885 },{ 00:23:25.885 "params": { 00:23:25.885 "name": "Nvme4", 00:23:25.885 "trtype": "tcp", 00:23:25.885 "traddr": "10.0.0.2", 00:23:25.885 "adrfam": "ipv4", 00:23:25.885 "trsvcid": "4420", 00:23:25.885 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:25.885 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:25.885 "hdgst": false, 00:23:25.885 "ddgst": false 00:23:25.885 }, 00:23:25.885 "method": "bdev_nvme_attach_controller" 00:23:25.885 },{ 00:23:25.885 "params": { 00:23:25.885 "name": "Nvme5", 00:23:25.885 "trtype": "tcp", 00:23:25.885 "traddr": "10.0.0.2", 00:23:25.885 "adrfam": "ipv4", 00:23:25.885 "trsvcid": "4420", 00:23:25.885 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:25.885 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:25.885 "hdgst": false, 00:23:25.885 "ddgst": false 00:23:25.885 }, 00:23:25.885 "method": "bdev_nvme_attach_controller" 00:23:25.885 },{ 00:23:25.885 "params": { 00:23:25.885 "name": "Nvme6", 00:23:25.885 "trtype": "tcp", 00:23:25.885 "traddr": "10.0.0.2", 00:23:25.885 "adrfam": "ipv4", 00:23:25.885 "trsvcid": "4420", 00:23:25.885 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:25.885 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:25.885 "hdgst": false, 00:23:25.885 "ddgst": false 00:23:25.885 }, 00:23:25.885 "method": "bdev_nvme_attach_controller" 00:23:25.885 },{ 00:23:25.885 "params": { 00:23:25.885 "name": "Nvme7", 00:23:25.885 "trtype": "tcp", 00:23:25.885 "traddr": "10.0.0.2", 00:23:25.885 "adrfam": "ipv4", 00:23:25.885 "trsvcid": "4420", 00:23:25.885 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:25.885 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:25.885 "hdgst": false, 00:23:25.885 "ddgst": false 00:23:25.885 }, 00:23:25.885 "method": "bdev_nvme_attach_controller" 
00:23:25.885 },{ 00:23:25.885 "params": { 00:23:25.885 "name": "Nvme8", 00:23:25.885 "trtype": "tcp", 00:23:25.885 "traddr": "10.0.0.2", 00:23:25.885 "adrfam": "ipv4", 00:23:25.885 "trsvcid": "4420", 00:23:25.885 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:25.885 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:25.885 "hdgst": false, 00:23:25.885 "ddgst": false 00:23:25.885 }, 00:23:25.885 "method": "bdev_nvme_attach_controller" 00:23:25.885 },{ 00:23:25.885 "params": { 00:23:25.885 "name": "Nvme9", 00:23:25.885 "trtype": "tcp", 00:23:25.885 "traddr": "10.0.0.2", 00:23:25.885 "adrfam": "ipv4", 00:23:25.885 "trsvcid": "4420", 00:23:25.885 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:25.885 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:25.885 "hdgst": false, 00:23:25.885 "ddgst": false 00:23:25.885 }, 00:23:25.885 "method": "bdev_nvme_attach_controller" 00:23:25.885 },{ 00:23:25.885 "params": { 00:23:25.885 "name": "Nvme10", 00:23:25.885 "trtype": "tcp", 00:23:25.885 "traddr": "10.0.0.2", 00:23:25.885 "adrfam": "ipv4", 00:23:25.885 "trsvcid": "4420", 00:23:25.885 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:25.885 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:25.885 "hdgst": false, 00:23:25.885 "ddgst": false 00:23:25.885 }, 00:23:25.885 "method": "bdev_nvme_attach_controller" 00:23:25.885 }' 00:23:25.885 [2024-07-15 23:59:40.892434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.886 [2024-07-15 23:59:40.957156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.270 23:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:23:27.270 23:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # return 0 00:23:27.270 23:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:27.270 23:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:27.270 23:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:27.270 23:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:27.270 23:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 538807 00:23:27.270 23:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:27.270 23:59:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:28.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 538807 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 538429 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.213 { 00:23:28.213 "params": { 00:23:28.213 "name": "Nvme$subsystem", 00:23:28.213 "trtype": "$TEST_TRANSPORT", 00:23:28.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.213 "adrfam": "ipv4", 00:23:28.213 "trsvcid": "$NVMF_PORT", 00:23:28.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.213 "hdgst": ${hdgst:-false}, 00:23:28.213 "ddgst": ${ddgst:-false} 00:23:28.213 }, 00:23:28.213 "method": "bdev_nvme_attach_controller" 00:23:28.213 } 00:23:28.213 EOF 00:23:28.213 )") 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.213 { 00:23:28.213 "params": { 00:23:28.213 "name": "Nvme$subsystem", 00:23:28.213 "trtype": "$TEST_TRANSPORT", 00:23:28.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.213 "adrfam": "ipv4", 00:23:28.213 "trsvcid": "$NVMF_PORT", 00:23:28.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.213 "hdgst": ${hdgst:-false}, 00:23:28.213 "ddgst": ${ddgst:-false} 00:23:28.213 }, 00:23:28.213 "method": "bdev_nvme_attach_controller" 00:23:28.213 } 00:23:28.213 EOF 00:23:28.213 )") 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.213 { 00:23:28.213 "params": { 00:23:28.213 "name": "Nvme$subsystem", 00:23:28.213 "trtype": "$TEST_TRANSPORT", 00:23:28.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.213 "adrfam": "ipv4", 00:23:28.213 "trsvcid": "$NVMF_PORT", 00:23:28.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.213 "hdgst": ${hdgst:-false}, 00:23:28.213 "ddgst": ${ddgst:-false} 00:23:28.213 }, 00:23:28.213 "method": "bdev_nvme_attach_controller" 00:23:28.213 } 00:23:28.213 EOF 00:23:28.213 )") 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.213 { 00:23:28.213 "params": { 00:23:28.213 "name": "Nvme$subsystem", 00:23:28.213 "trtype": "$TEST_TRANSPORT", 00:23:28.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.213 "adrfam": "ipv4", 00:23:28.213 "trsvcid": "$NVMF_PORT", 00:23:28.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.213 "hdgst": ${hdgst:-false}, 00:23:28.213 "ddgst": ${ddgst:-false} 00:23:28.213 }, 00:23:28.213 "method": "bdev_nvme_attach_controller" 00:23:28.213 } 00:23:28.213 EOF 00:23:28.213 )") 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.213 { 00:23:28.213 "params": { 00:23:28.213 "name": "Nvme$subsystem", 00:23:28.213 "trtype": "$TEST_TRANSPORT", 00:23:28.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.213 "adrfam": "ipv4", 00:23:28.213 "trsvcid": "$NVMF_PORT", 00:23:28.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.213 "hdgst": ${hdgst:-false}, 00:23:28.213 "ddgst": ${ddgst:-false} 00:23:28.213 }, 00:23:28.213 "method": "bdev_nvme_attach_controller" 00:23:28.213 } 00:23:28.213 EOF 00:23:28.213 )") 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.213 { 00:23:28.213 "params": { 00:23:28.213 "name": "Nvme$subsystem", 00:23:28.213 "trtype": "$TEST_TRANSPORT", 00:23:28.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.213 "adrfam": "ipv4", 00:23:28.213 "trsvcid": "$NVMF_PORT", 00:23:28.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.213 "hdgst": ${hdgst:-false}, 00:23:28.213 "ddgst": ${ddgst:-false} 00:23:28.213 }, 00:23:28.213 "method": "bdev_nvme_attach_controller" 00:23:28.213 } 00:23:28.213 EOF 00:23:28.213 )") 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.213 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.213 { 00:23:28.213 "params": { 00:23:28.213 "name": "Nvme$subsystem", 00:23:28.213 "trtype": "$TEST_TRANSPORT", 00:23:28.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.213 "adrfam": "ipv4", 00:23:28.213 "trsvcid": "$NVMF_PORT", 00:23:28.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.213 "hdgst": ${hdgst:-false}, 00:23:28.213 "ddgst": ${ddgst:-false} 00:23:28.213 }, 00:23:28.213 "method": "bdev_nvme_attach_controller" 00:23:28.213 } 00:23:28.213 EOF 00:23:28.213 )") 00:23:28.473 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.473 [2024-07-15 23:59:43.407019] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
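The bdevperf invocation traced here (target/shutdown.sh@91) reuses the same generated JSON but drives actual I/O: -q 64 keeps 64 commands in flight per bdev, -o 65536 issues 64 KiB I/Os, -w verify runs a write/read-back verification pattern, and -t 1 limits the run to one second. At that I/O size the summary further down is internally consistent: 2496.58 aggregate IOPS at 64 KiB each is 2496.58/16 = 156.04 MiB/s, matching the Total row. Roughly equivalent to the following, with paths shortened and gen_nvmf_target_json supplied by the test environment:

# one-second verify workload, queue depth 64, 64 KiB I/Os, over the generated NVMe-oF config
./build/examples/bdevperf --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) -q 64 -o 65536 -w verify -t 1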
00:23:28.473 [2024-07-15 23:59:43.407072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid539213 ] 00:23:28.473 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.473 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.473 { 00:23:28.473 "params": { 00:23:28.473 "name": "Nvme$subsystem", 00:23:28.473 "trtype": "$TEST_TRANSPORT", 00:23:28.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.473 "adrfam": "ipv4", 00:23:28.473 "trsvcid": "$NVMF_PORT", 00:23:28.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.473 "hdgst": ${hdgst:-false}, 00:23:28.473 "ddgst": ${ddgst:-false} 00:23:28.473 }, 00:23:28.473 "method": "bdev_nvme_attach_controller" 00:23:28.473 } 00:23:28.473 EOF 00:23:28.473 )") 00:23:28.473 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.473 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.473 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.473 { 00:23:28.473 "params": { 00:23:28.473 "name": "Nvme$subsystem", 00:23:28.473 "trtype": "$TEST_TRANSPORT", 00:23:28.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.473 "adrfam": "ipv4", 00:23:28.473 "trsvcid": "$NVMF_PORT", 00:23:28.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.473 "hdgst": ${hdgst:-false}, 00:23:28.473 "ddgst": ${ddgst:-false} 00:23:28.473 }, 00:23:28.473 "method": "bdev_nvme_attach_controller" 00:23:28.473 } 00:23:28.473 EOF 00:23:28.473 )") 00:23:28.473 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.473 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.473 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.473 { 00:23:28.473 "params": { 00:23:28.473 "name": "Nvme$subsystem", 00:23:28.473 "trtype": "$TEST_TRANSPORT", 00:23:28.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.473 "adrfam": "ipv4", 00:23:28.473 "trsvcid": "$NVMF_PORT", 00:23:28.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.473 "hdgst": ${hdgst:-false}, 00:23:28.473 "ddgst": ${ddgst:-false} 00:23:28.473 }, 00:23:28.473 "method": "bdev_nvme_attach_controller" 00:23:28.473 } 00:23:28.473 EOF 00:23:28.473 )") 00:23:28.473 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:28.473 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:23:28.473 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:28.473 23:59:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:28.473 "params": { 00:23:28.473 "name": "Nvme1", 00:23:28.473 "trtype": "tcp", 00:23:28.473 "traddr": "10.0.0.2", 00:23:28.473 "adrfam": "ipv4", 00:23:28.474 "trsvcid": "4420", 00:23:28.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:28.474 "hdgst": false, 00:23:28.474 "ddgst": false 00:23:28.474 }, 00:23:28.474 "method": "bdev_nvme_attach_controller" 00:23:28.474 },{ 00:23:28.474 "params": { 00:23:28.474 "name": "Nvme2", 00:23:28.474 "trtype": "tcp", 00:23:28.474 "traddr": "10.0.0.2", 00:23:28.474 "adrfam": "ipv4", 00:23:28.474 "trsvcid": "4420", 00:23:28.474 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:28.474 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:28.474 "hdgst": false, 00:23:28.474 "ddgst": false 00:23:28.474 }, 00:23:28.474 "method": "bdev_nvme_attach_controller" 00:23:28.474 },{ 00:23:28.474 "params": { 00:23:28.474 "name": "Nvme3", 00:23:28.474 "trtype": "tcp", 00:23:28.474 "traddr": "10.0.0.2", 00:23:28.474 "adrfam": "ipv4", 00:23:28.474 "trsvcid": "4420", 00:23:28.474 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:28.474 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:28.474 "hdgst": false, 00:23:28.474 "ddgst": false 00:23:28.474 }, 00:23:28.474 "method": "bdev_nvme_attach_controller" 00:23:28.474 },{ 00:23:28.474 "params": { 00:23:28.474 "name": "Nvme4", 00:23:28.474 "trtype": "tcp", 00:23:28.474 "traddr": "10.0.0.2", 00:23:28.474 "adrfam": "ipv4", 00:23:28.474 "trsvcid": "4420", 00:23:28.474 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:28.474 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:28.474 "hdgst": false, 00:23:28.474 "ddgst": false 00:23:28.474 }, 00:23:28.474 "method": "bdev_nvme_attach_controller" 00:23:28.474 },{ 00:23:28.474 "params": { 00:23:28.474 "name": "Nvme5", 00:23:28.474 "trtype": "tcp", 00:23:28.474 "traddr": "10.0.0.2", 00:23:28.474 "adrfam": "ipv4", 00:23:28.474 "trsvcid": "4420", 00:23:28.474 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:28.474 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:28.474 "hdgst": false, 00:23:28.474 "ddgst": false 00:23:28.474 }, 00:23:28.474 "method": "bdev_nvme_attach_controller" 00:23:28.474 },{ 00:23:28.474 "params": { 00:23:28.474 "name": "Nvme6", 00:23:28.474 "trtype": "tcp", 00:23:28.474 "traddr": "10.0.0.2", 00:23:28.474 "adrfam": "ipv4", 00:23:28.474 "trsvcid": "4420", 00:23:28.474 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:28.474 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:28.474 "hdgst": false, 00:23:28.474 "ddgst": false 00:23:28.474 }, 00:23:28.474 "method": "bdev_nvme_attach_controller" 00:23:28.474 },{ 00:23:28.474 "params": { 00:23:28.474 "name": "Nvme7", 00:23:28.474 "trtype": "tcp", 00:23:28.474 "traddr": "10.0.0.2", 00:23:28.474 "adrfam": "ipv4", 00:23:28.474 "trsvcid": "4420", 00:23:28.474 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:28.474 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:28.474 "hdgst": false, 00:23:28.474 "ddgst": false 00:23:28.474 }, 00:23:28.474 "method": "bdev_nvme_attach_controller" 00:23:28.474 },{ 00:23:28.474 "params": { 00:23:28.474 "name": "Nvme8", 00:23:28.474 "trtype": "tcp", 00:23:28.474 "traddr": "10.0.0.2", 00:23:28.474 "adrfam": "ipv4", 00:23:28.474 "trsvcid": "4420", 00:23:28.474 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:28.474 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:28.474 "hdgst": false, 
00:23:28.474 "ddgst": false 00:23:28.474 }, 00:23:28.474 "method": "bdev_nvme_attach_controller" 00:23:28.474 },{ 00:23:28.474 "params": { 00:23:28.474 "name": "Nvme9", 00:23:28.474 "trtype": "tcp", 00:23:28.474 "traddr": "10.0.0.2", 00:23:28.474 "adrfam": "ipv4", 00:23:28.474 "trsvcid": "4420", 00:23:28.474 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:28.474 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:28.474 "hdgst": false, 00:23:28.474 "ddgst": false 00:23:28.474 }, 00:23:28.474 "method": "bdev_nvme_attach_controller" 00:23:28.474 },{ 00:23:28.474 "params": { 00:23:28.474 "name": "Nvme10", 00:23:28.474 "trtype": "tcp", 00:23:28.474 "traddr": "10.0.0.2", 00:23:28.474 "adrfam": "ipv4", 00:23:28.474 "trsvcid": "4420", 00:23:28.474 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:28.474 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:28.474 "hdgst": false, 00:23:28.474 "ddgst": false 00:23:28.474 }, 00:23:28.474 "method": "bdev_nvme_attach_controller" 00:23:28.474 }' 00:23:28.474 [2024-07-15 23:59:43.475164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.474 [2024-07-15 23:59:43.539689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.857 Running I/O for 1 seconds... 00:23:30.800 00:23:30.800 Latency(us) 00:23:30.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.800 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.800 Verification LBA range: start 0x0 length 0x400 00:23:30.800 Nvme1n1 : 1.11 230.31 14.39 0.00 0.00 275100.16 16711.68 270882.13 00:23:30.800 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.800 Verification LBA range: start 0x0 length 0x400 00:23:30.800 Nvme2n1 : 1.12 232.97 14.56 0.00 0.00 264765.86 12014.93 242920.11 00:23:30.800 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.800 Verification LBA range: start 0x0 length 0x400 00:23:30.800 Nvme3n1 : 1.11 229.74 14.36 0.00 0.00 266228.69 19442.35 290106.03 00:23:30.800 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.800 Verification LBA range: start 0x0 length 0x400 00:23:30.800 Nvme4n1 : 1.15 277.73 17.36 0.00 0.00 213147.65 17803.95 242920.11 00:23:30.800 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.800 Verification LBA range: start 0x0 length 0x400 00:23:30.800 Nvme5n1 : 1.12 227.82 14.24 0.00 0.00 259194.45 16493.23 227191.47 00:23:30.800 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.800 Verification LBA range: start 0x0 length 0x400 00:23:30.800 Nvme6n1 : 1.18 270.72 16.92 0.00 0.00 215065.09 19005.44 242920.11 00:23:30.800 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.800 Verification LBA range: start 0x0 length 0x400 00:23:30.800 Nvme7n1 : 1.13 227.26 14.20 0.00 0.00 250155.09 32549.55 230686.72 00:23:30.800 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.800 Verification LBA range: start 0x0 length 0x400 00:23:30.800 Nvme8n1 : 1.19 269.13 16.82 0.00 0.00 208998.57 17585.49 239424.85 00:23:30.800 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.800 Verification LBA range: start 0x0 length 0x400 00:23:30.800 Nvme9n1 : 1.21 265.54 16.60 0.00 0.00 208311.64 14417.92 255153.49 00:23:30.800 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.800 Verification LBA range: start 0x0 length 0x400 
00:23:30.800 Nvme10n1 : 1.21 265.34 16.58 0.00 0.00 204569.26 14745.60 265639.25 00:23:30.800 =================================================================================================================== 00:23:30.800 Total : 2496.58 156.04 0.00 0.00 233648.51 12014.93 290106.03 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:31.061 rmmod nvme_tcp 00:23:31.061 rmmod nvme_fabrics 00:23:31.061 rmmod nvme_keyring 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 538429 ']' 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 538429 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@942 -- # '[' -z 538429 ']' 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # kill -0 538429 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@947 -- # uname 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 538429 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # echo 'killing process with pid 538429' 00:23:31.061 killing process with pid 538429 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@961 -- # kill 538429 00:23:31.061 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # wait 538429 00:23:31.322 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:31.322 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:31.322 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:31.322 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:31.322 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:31.322 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.322 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:31.322 23:59:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.871 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:33.871 00:23:33.871 real 0m17.331s 00:23:33.871 user 0m33.384s 00:23:33.871 sys 0m7.193s 00:23:33.871 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:23:33.871 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:33.871 ************************************ 00:23:33.871 END TEST nvmf_shutdown_tc1 00:23:33.871 ************************************ 00:23:33.871 23:59:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1136 -- # return 0 00:23:33.871 23:59:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:33.871 23:59:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:23:33.871 23:59:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # xtrace_disable 00:23:33.871 23:59:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:33.871 ************************************ 00:23:33.871 START TEST nvmf_shutdown_tc2 00:23:33.871 ************************************ 00:23:33.871 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1117 -- # nvmf_shutdown_tc2 00:23:33.871 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:33.871 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:33.871 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:33.871 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.871 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:33.871 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:33.871 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:33.871 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.871 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.871 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.871 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:33.871 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:33.871 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:33.871 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:33.871 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma 
]] 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:33.872 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:33.872 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:33.872 Found net devices under 0000:31:00.0: cvl_0_0 00:23:33.872 23:59:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:33.872 Found net devices under 0000:31:00.1: cvl_0_1 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:33.872 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:33.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:33.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.712 ms 00:23:33.873 00:23:33.873 --- 10.0.0.2 ping statistics --- 00:23:33.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.873 rtt min/avg/max/mdev = 0.712/0.712/0.712/0.000 ms 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:33.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:33.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:23:33.873 00:23:33.873 --- 10.0.0.1 ping statistics --- 00:23:33.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.873 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=540522 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 540522 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@823 -- # '[' -z 540522 ']' 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@828 -- # local max_retries=100 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # xtrace_disable 00:23:33.873 23:59:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:33.873 [2024-07-15 23:59:49.029880] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:23:33.873 [2024-07-15 23:59:49.029949] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.134 [2024-07-15 23:59:49.109419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:34.134 [2024-07-15 23:59:49.177289] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.134 [2024-07-15 23:59:49.177330] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.134 [2024-07-15 23:59:49.177336] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.135 [2024-07-15 23:59:49.177340] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.135 [2024-07-15 23:59:49.177345] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
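(Aside on the core mask used above: nvmf_tgt is launched with "-m 0x1E", and 0x1E is binary 11110, so bits 1 through 4 are set and core 0 is left free — which is why the reactor notices that follow report cores 1, 2, 3 and 4 rather than core 0. A minimal sketch of that decoding, not part of the test scripts, shown only to make the mask-to-core mapping explicit:

  mask=0x1E
  for core in $(seq 0 7); do
    # test bit "core" of the mask; prints 1 2 3 4 for 0x1E
    (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
  done
)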
00:23:34.135 [2024-07-15 23:59:49.177479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.135 [2024-07-15 23:59:49.177699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:34.135 [2024-07-15 23:59:49.177859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.135 [2024-07-15 23:59:49.177860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # return 0 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:34.707 [2024-07-15 23:59:49.845674] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:34.707 23:59:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:34.707 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:34.968 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:34.968 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:34.968 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:34.968 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:34.968 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:34.968 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:34.968 23:59:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:34.968 Malloc1 00:23:34.968 [2024-07-15 23:59:49.944612] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.968 Malloc2 00:23:34.968 Malloc3 00:23:34.968 Malloc4 00:23:34.968 Malloc5 00:23:34.968 Malloc6 00:23:34.968 Malloc7 00:23:35.229 Malloc8 00:23:35.229 Malloc9 00:23:35.229 Malloc10 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=540729 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 540729 /var/tmp/bdevperf.sock 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@823 -- # '[' -z 540729 ']' 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@828 -- # local max_retries=100 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # xtrace_disable 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:35.229 { 00:23:35.229 "params": { 00:23:35.229 "name": "Nvme$subsystem", 00:23:35.229 "trtype": "$TEST_TRANSPORT", 00:23:35.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.229 "adrfam": "ipv4", 00:23:35.229 "trsvcid": "$NVMF_PORT", 00:23:35.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.229 "hdgst": ${hdgst:-false}, 00:23:35.229 "ddgst": ${ddgst:-false} 00:23:35.229 }, 00:23:35.229 "method": "bdev_nvme_attach_controller" 00:23:35.229 } 00:23:35.229 EOF 00:23:35.229 )") 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:35.229 { 00:23:35.229 "params": { 00:23:35.229 "name": "Nvme$subsystem", 00:23:35.229 "trtype": "$TEST_TRANSPORT", 00:23:35.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.229 "adrfam": "ipv4", 00:23:35.229 "trsvcid": "$NVMF_PORT", 00:23:35.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.229 "hdgst": ${hdgst:-false}, 00:23:35.229 "ddgst": ${ddgst:-false} 00:23:35.229 }, 00:23:35.229 "method": "bdev_nvme_attach_controller" 00:23:35.229 } 00:23:35.229 EOF 00:23:35.229 )") 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:35.229 { 00:23:35.229 "params": { 00:23:35.229 "name": "Nvme$subsystem", 00:23:35.229 "trtype": "$TEST_TRANSPORT", 00:23:35.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.229 "adrfam": "ipv4", 00:23:35.229 "trsvcid": "$NVMF_PORT", 00:23:35.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.229 "hdgst": ${hdgst:-false}, 00:23:35.229 "ddgst": ${ddgst:-false} 00:23:35.229 }, 00:23:35.229 "method": "bdev_nvme_attach_controller" 00:23:35.229 } 00:23:35.229 EOF 00:23:35.229 )") 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:35.229 { 00:23:35.229 "params": { 00:23:35.229 "name": "Nvme$subsystem", 00:23:35.229 "trtype": "$TEST_TRANSPORT", 00:23:35.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.229 "adrfam": "ipv4", 00:23:35.229 "trsvcid": "$NVMF_PORT", 00:23:35.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.229 "hdgst": ${hdgst:-false}, 00:23:35.229 "ddgst": ${ddgst:-false} 00:23:35.229 }, 00:23:35.229 "method": "bdev_nvme_attach_controller" 00:23:35.229 } 00:23:35.229 EOF 00:23:35.229 )") 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:35.229 { 00:23:35.229 "params": { 00:23:35.229 "name": "Nvme$subsystem", 00:23:35.229 "trtype": "$TEST_TRANSPORT", 00:23:35.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.229 "adrfam": "ipv4", 00:23:35.229 "trsvcid": "$NVMF_PORT", 00:23:35.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.229 "hdgst": ${hdgst:-false}, 00:23:35.229 "ddgst": ${ddgst:-false} 00:23:35.229 }, 00:23:35.229 "method": "bdev_nvme_attach_controller" 00:23:35.229 } 00:23:35.229 EOF 00:23:35.229 )") 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:35.229 { 00:23:35.229 "params": { 00:23:35.229 "name": "Nvme$subsystem", 00:23:35.229 "trtype": "$TEST_TRANSPORT", 00:23:35.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.229 "adrfam": "ipv4", 00:23:35.229 "trsvcid": "$NVMF_PORT", 00:23:35.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.229 "hdgst": ${hdgst:-false}, 00:23:35.229 "ddgst": ${ddgst:-false} 00:23:35.229 }, 00:23:35.229 "method": "bdev_nvme_attach_controller" 00:23:35.229 } 00:23:35.229 EOF 00:23:35.229 )") 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:35.229 { 00:23:35.229 "params": { 00:23:35.229 "name": "Nvme$subsystem", 00:23:35.229 "trtype": "$TEST_TRANSPORT", 00:23:35.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.229 "adrfam": "ipv4", 00:23:35.229 "trsvcid": "$NVMF_PORT", 00:23:35.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.229 "hdgst": ${hdgst:-false}, 00:23:35.229 "ddgst": ${ddgst:-false} 00:23:35.229 }, 00:23:35.229 "method": "bdev_nvme_attach_controller" 00:23:35.229 } 00:23:35.229 EOF 00:23:35.229 )") 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:35.229 [2024-07-15 23:59:50.394860] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:23:35.229 [2024-07-15 23:59:50.394922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid540729 ] 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:35.229 { 00:23:35.229 "params": { 00:23:35.229 "name": "Nvme$subsystem", 00:23:35.229 "trtype": "$TEST_TRANSPORT", 00:23:35.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.229 "adrfam": "ipv4", 00:23:35.229 "trsvcid": "$NVMF_PORT", 00:23:35.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.229 "hdgst": ${hdgst:-false}, 00:23:35.229 "ddgst": ${ddgst:-false} 00:23:35.229 }, 00:23:35.229 "method": "bdev_nvme_attach_controller" 00:23:35.229 } 00:23:35.229 EOF 00:23:35.229 )") 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:35.229 { 00:23:35.229 "params": { 00:23:35.229 "name": "Nvme$subsystem", 00:23:35.229 "trtype": "$TEST_TRANSPORT", 00:23:35.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.229 "adrfam": "ipv4", 00:23:35.229 "trsvcid": "$NVMF_PORT", 00:23:35.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.229 "hdgst": ${hdgst:-false}, 00:23:35.229 "ddgst": ${ddgst:-false} 00:23:35.229 }, 00:23:35.229 "method": "bdev_nvme_attach_controller" 00:23:35.229 } 00:23:35.229 EOF 00:23:35.229 )") 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:35.229 { 00:23:35.229 "params": { 00:23:35.229 "name": "Nvme$subsystem", 00:23:35.229 "trtype": "$TEST_TRANSPORT", 00:23:35.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.229 "adrfam": "ipv4", 00:23:35.229 "trsvcid": "$NVMF_PORT", 00:23:35.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.229 "hdgst": ${hdgst:-false}, 00:23:35.229 "ddgst": ${ddgst:-false} 00:23:35.229 }, 00:23:35.229 "method": "bdev_nvme_attach_controller" 00:23:35.229 } 00:23:35.229 EOF 00:23:35.229 )") 00:23:35.229 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:35.490 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:23:35.490 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:35.490 23:59:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:35.490 "params": { 00:23:35.490 "name": "Nvme1", 00:23:35.490 "trtype": "tcp", 00:23:35.490 "traddr": "10.0.0.2", 00:23:35.490 "adrfam": "ipv4", 00:23:35.490 "trsvcid": "4420", 00:23:35.490 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.490 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:35.490 "hdgst": false, 00:23:35.490 "ddgst": false 00:23:35.490 }, 00:23:35.490 "method": "bdev_nvme_attach_controller" 00:23:35.490 },{ 00:23:35.490 "params": { 00:23:35.490 "name": "Nvme2", 00:23:35.490 "trtype": "tcp", 00:23:35.490 "traddr": "10.0.0.2", 00:23:35.490 "adrfam": "ipv4", 00:23:35.490 "trsvcid": "4420", 00:23:35.490 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:35.490 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:35.490 "hdgst": false, 00:23:35.490 "ddgst": false 00:23:35.490 }, 00:23:35.490 "method": "bdev_nvme_attach_controller" 00:23:35.490 },{ 00:23:35.490 "params": { 00:23:35.490 "name": "Nvme3", 00:23:35.490 "trtype": "tcp", 00:23:35.490 "traddr": "10.0.0.2", 00:23:35.490 "adrfam": "ipv4", 00:23:35.490 "trsvcid": "4420", 00:23:35.490 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:35.490 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:35.490 "hdgst": false, 00:23:35.490 "ddgst": false 00:23:35.490 }, 00:23:35.490 "method": "bdev_nvme_attach_controller" 00:23:35.490 },{ 00:23:35.490 "params": { 00:23:35.490 "name": "Nvme4", 00:23:35.490 "trtype": "tcp", 00:23:35.490 "traddr": "10.0.0.2", 00:23:35.490 "adrfam": "ipv4", 00:23:35.490 "trsvcid": "4420", 00:23:35.490 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:35.490 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:35.490 "hdgst": false, 00:23:35.490 "ddgst": false 00:23:35.490 }, 00:23:35.490 "method": "bdev_nvme_attach_controller" 00:23:35.490 },{ 00:23:35.490 "params": { 00:23:35.490 "name": "Nvme5", 00:23:35.490 "trtype": "tcp", 00:23:35.490 "traddr": "10.0.0.2", 00:23:35.490 "adrfam": "ipv4", 00:23:35.490 "trsvcid": "4420", 00:23:35.490 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:35.490 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:35.490 "hdgst": false, 00:23:35.490 "ddgst": false 00:23:35.490 }, 00:23:35.490 "method": "bdev_nvme_attach_controller" 00:23:35.490 },{ 00:23:35.490 "params": { 00:23:35.490 "name": "Nvme6", 00:23:35.490 "trtype": "tcp", 00:23:35.490 "traddr": "10.0.0.2", 00:23:35.490 "adrfam": "ipv4", 00:23:35.490 "trsvcid": "4420", 00:23:35.490 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:35.490 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:35.490 "hdgst": false, 00:23:35.490 "ddgst": false 00:23:35.490 }, 00:23:35.490 "method": "bdev_nvme_attach_controller" 00:23:35.490 },{ 00:23:35.490 "params": { 00:23:35.490 "name": "Nvme7", 00:23:35.490 "trtype": "tcp", 00:23:35.490 "traddr": "10.0.0.2", 00:23:35.490 "adrfam": "ipv4", 00:23:35.490 "trsvcid": "4420", 00:23:35.490 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:35.490 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:35.490 "hdgst": false, 00:23:35.490 "ddgst": false 00:23:35.490 }, 00:23:35.490 "method": "bdev_nvme_attach_controller" 00:23:35.490 },{ 00:23:35.490 "params": { 00:23:35.490 "name": "Nvme8", 00:23:35.490 "trtype": "tcp", 00:23:35.490 "traddr": "10.0.0.2", 00:23:35.490 "adrfam": "ipv4", 00:23:35.490 "trsvcid": "4420", 00:23:35.490 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:35.490 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:35.490 "hdgst": false, 
00:23:35.490 "ddgst": false 00:23:35.490 }, 00:23:35.490 "method": "bdev_nvme_attach_controller" 00:23:35.490 },{ 00:23:35.490 "params": { 00:23:35.490 "name": "Nvme9", 00:23:35.490 "trtype": "tcp", 00:23:35.490 "traddr": "10.0.0.2", 00:23:35.490 "adrfam": "ipv4", 00:23:35.490 "trsvcid": "4420", 00:23:35.490 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:35.490 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:35.490 "hdgst": false, 00:23:35.490 "ddgst": false 00:23:35.490 }, 00:23:35.490 "method": "bdev_nvme_attach_controller" 00:23:35.490 },{ 00:23:35.490 "params": { 00:23:35.490 "name": "Nvme10", 00:23:35.490 "trtype": "tcp", 00:23:35.490 "traddr": "10.0.0.2", 00:23:35.490 "adrfam": "ipv4", 00:23:35.490 "trsvcid": "4420", 00:23:35.490 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:35.490 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:35.490 "hdgst": false, 00:23:35.490 "ddgst": false 00:23:35.490 }, 00:23:35.490 "method": "bdev_nvme_attach_controller" 00:23:35.490 }' 00:23:35.490 [2024-07-15 23:59:50.461786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.490 [2024-07-15 23:59:50.526706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.872 Running I/O for 10 seconds... 00:23:36.872 23:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:23:36.872 23:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # return 0 00:23:36.872 23:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:36.872 23:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:36.872 23:59:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:37.132 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:37.132 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:37.132 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:37.132 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:37.132 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:37.132 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:37.132 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:37.132 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:37.132 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:37.132 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:37.132 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:37.132 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:37.132 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:37.132 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:37.132 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 
3 -ge 100 ']' 00:23:37.132 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:37.393 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:37.393 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:37.393 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:37.393 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:37.393 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:37.393 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:37.393 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:37.393 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:37.393 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:37.393 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:37.654 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:37.654 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:37.654 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:37.654 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:37.654 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:37.654 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:37.654 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:37.654 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:37.654 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:37.654 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:37.654 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:37.654 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:37.654 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 540729 00:23:37.654 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@942 -- # '[' -z 540729 ']' 00:23:37.654 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # kill -0 540729 00:23:37.654 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # uname 00:23:37.654 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:23:37.654 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 540729 00:23:37.654 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:23:37.654 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # 
'[' reactor_0 = sudo ']' 00:23:37.654 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # echo 'killing process with pid 540729' 00:23:37.654 killing process with pid 540729 00:23:37.654 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@961 -- # kill 540729 00:23:37.654 23:59:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # wait 540729 00:23:37.914 Received shutdown signal, test time was about 0.986427 seconds 00:23:37.914 00:23:37.914 Latency(us) 00:23:37.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.914 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:37.914 Verification LBA range: start 0x0 length 0x400 00:23:37.914 Nvme1n1 : 0.92 209.21 13.08 0.00 0.00 300365.94 45219.84 237677.23 00:23:37.914 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:37.914 Verification LBA range: start 0x0 length 0x400 00:23:37.915 Nvme2n1 : 0.93 206.91 12.93 0.00 0.00 299394.84 37137.07 260396.37 00:23:37.915 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:37.915 Verification LBA range: start 0x0 length 0x400 00:23:37.915 Nvme3n1 : 0.94 271.62 16.98 0.00 0.00 222932.05 21845.33 235929.60 00:23:37.915 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:37.915 Verification LBA range: start 0x0 length 0x400 00:23:37.915 Nvme4n1 : 0.94 272.85 17.05 0.00 0.00 217457.07 18896.21 249910.61 00:23:37.915 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:37.915 Verification LBA range: start 0x0 length 0x400 00:23:37.915 Nvme5n1 : 0.99 259.76 16.23 0.00 0.00 214763.52 13161.81 253405.87 00:23:37.915 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:37.915 Verification LBA range: start 0x0 length 0x400 00:23:37.915 Nvme6n1 : 0.92 209.46 13.09 0.00 0.00 269818.88 32112.64 239424.85 00:23:37.915 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:37.915 Verification LBA range: start 0x0 length 0x400 00:23:37.915 Nvme7n1 : 0.92 209.73 13.11 0.00 0.00 262893.23 15073.28 248162.99 00:23:37.915 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:37.915 Verification LBA range: start 0x0 length 0x400 00:23:37.915 Nvme8n1 : 0.93 275.22 17.20 0.00 0.00 196439.25 13707.95 249910.61 00:23:37.915 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:37.915 Verification LBA range: start 0x0 length 0x400 00:23:37.915 Nvme9n1 : 0.92 207.60 12.98 0.00 0.00 253871.50 20206.93 269134.51 00:23:37.915 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:37.915 Verification LBA range: start 0x0 length 0x400 00:23:37.915 Nvme10n1 : 0.94 273.79 17.11 0.00 0.00 188226.13 17476.27 244667.73 00:23:37.915 =================================================================================================================== 00:23:37.915 Total : 2396.15 149.76 0.00 0.00 237665.87 13161.81 269134.51 00:23:37.915 23:59:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:23:38.855 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 540522 00:23:38.855 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:23:38.855 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 
00:23:38.855 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:39.116 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:39.116 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:39.116 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:39.116 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:39.116 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:39.116 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:39.116 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:39.116 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:39.116 rmmod nvme_tcp 00:23:39.116 rmmod nvme_fabrics 00:23:39.116 rmmod nvme_keyring 00:23:39.116 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:39.116 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:39.116 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:39.116 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 540522 ']' 00:23:39.116 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 540522 00:23:39.116 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@942 -- # '[' -z 540522 ']' 00:23:39.116 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # kill -0 540522 00:23:39.116 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # uname 00:23:39.116 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:23:39.116 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 540522 00:23:39.116 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:23:39.116 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:23:39.116 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # echo 'killing process with pid 540522' 00:23:39.116 killing process with pid 540522 00:23:39.116 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@961 -- # kill 540522 00:23:39.116 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # wait 540522 00:23:39.377 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:39.377 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:39.377 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:39.377 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:39.377 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:39.377 
23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.377 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.377 23:59:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.288 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:41.549 00:23:41.549 real 0m7.896s 00:23:41.549 user 0m23.689s 00:23:41.549 sys 0m1.251s 00:23:41.549 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:23:41.549 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:41.549 ************************************ 00:23:41.550 END TEST nvmf_shutdown_tc2 00:23:41.550 ************************************ 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1136 -- # return 0 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # xtrace_disable 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:41.550 ************************************ 00:23:41.550 START TEST nvmf_shutdown_tc3 00:23:41.550 ************************************ 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1117 -- # nvmf_shutdown_tc3 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:41.550 23:59:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # 
for pci in "${pci_devs[@]}" 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:41.550 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:41.550 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:41.550 Found net devices under 0000:31:00.0: cvl_0_0 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:41.550 Found net devices under 0000:31:00.1: cvl_0_1 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:41.550 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:41.551 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:41.551 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:41.551 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:41.551 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:41.811 23:59:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:41.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:23:41.811 00:23:41.811 --- 10.0.0.2 ping statistics --- 00:23:41.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.811 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:41.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:41.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.462 ms 00:23:41.811 00:23:41.811 --- 10.0.0.1 ping statistics --- 00:23:41.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.811 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=542133 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 542133 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@823 -- # '[' -z 542133 ']' 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@828 -- # local max_retries=100 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:41.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # xtrace_disable 00:23:41.811 23:59:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:42.072 [2024-07-15 23:59:57.003817] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:23:42.072 [2024-07-15 23:59:57.003884] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.072 [2024-07-15 23:59:57.099024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:42.072 [2024-07-15 23:59:57.159633] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.072 [2024-07-15 23:59:57.159669] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.072 [2024-07-15 23:59:57.159674] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.072 [2024-07-15 23:59:57.159679] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.072 [2024-07-15 23:59:57.159683] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:42.072 [2024-07-15 23:59:57.159801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.072 [2024-07-15 23:59:57.159963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:42.072 [2024-07-15 23:59:57.160095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.072 [2024-07-15 23:59:57.160097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:42.642 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:23:42.642 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # return 0 00:23:42.642 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:42.642 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:42.642 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:42.642 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.642 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:42.642 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:42.642 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:42.642 [2024-07-15 23:59:57.818434] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:42.642 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:42.642 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:42.642 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:42.642 23:59:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:42.642 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:42.642 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:42.902 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:42.902 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:42.902 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:42.902 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:42.902 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:42.902 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:42.902 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:42.902 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:42.902 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:42.902 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:42.902 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:42.902 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:42.902 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:42.902 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:42.902 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:42.902 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:42.902 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:42.902 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:42.902 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:42.902 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:42.902 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:42.902 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:42.902 23:59:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:42.902 Malloc1 00:23:42.902 [2024-07-15 23:59:57.917232] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.902 Malloc2 00:23:42.902 Malloc3 00:23:42.902 Malloc4 00:23:42.902 Malloc5 00:23:42.902 Malloc6 00:23:43.162 Malloc7 00:23:43.162 Malloc8 00:23:43.162 Malloc9 00:23:43.162 Malloc10 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:43.162 23:59:58 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=542508 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 542508 /var/tmp/bdevperf.sock 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@823 -- # '[' -z 542508 ']' 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@828 -- # local max_retries=100 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:43.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # xtrace_disable 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.162 { 00:23:43.162 "params": { 00:23:43.162 "name": "Nvme$subsystem", 00:23:43.162 "trtype": "$TEST_TRANSPORT", 00:23:43.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.162 "adrfam": "ipv4", 00:23:43.162 "trsvcid": "$NVMF_PORT", 00:23:43.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.162 "hdgst": ${hdgst:-false}, 00:23:43.162 "ddgst": ${ddgst:-false} 00:23:43.162 }, 00:23:43.162 "method": "bdev_nvme_attach_controller" 00:23:43.162 } 00:23:43.162 EOF 00:23:43.162 )") 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.162 { 00:23:43.162 "params": { 00:23:43.162 "name": "Nvme$subsystem", 00:23:43.162 "trtype": "$TEST_TRANSPORT", 00:23:43.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.162 "adrfam": "ipv4", 00:23:43.162 "trsvcid": "$NVMF_PORT", 00:23:43.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.162 "hdgst": ${hdgst:-false}, 00:23:43.162 "ddgst": ${ddgst:-false} 00:23:43.162 }, 00:23:43.162 "method": 
"bdev_nvme_attach_controller" 00:23:43.162 } 00:23:43.162 EOF 00:23:43.162 )") 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.162 { 00:23:43.162 "params": { 00:23:43.162 "name": "Nvme$subsystem", 00:23:43.162 "trtype": "$TEST_TRANSPORT", 00:23:43.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.162 "adrfam": "ipv4", 00:23:43.162 "trsvcid": "$NVMF_PORT", 00:23:43.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.162 "hdgst": ${hdgst:-false}, 00:23:43.162 "ddgst": ${ddgst:-false} 00:23:43.162 }, 00:23:43.162 "method": "bdev_nvme_attach_controller" 00:23:43.162 } 00:23:43.162 EOF 00:23:43.162 )") 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.162 { 00:23:43.162 "params": { 00:23:43.162 "name": "Nvme$subsystem", 00:23:43.162 "trtype": "$TEST_TRANSPORT", 00:23:43.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.162 "adrfam": "ipv4", 00:23:43.162 "trsvcid": "$NVMF_PORT", 00:23:43.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.162 "hdgst": ${hdgst:-false}, 00:23:43.162 "ddgst": ${ddgst:-false} 00:23:43.162 }, 00:23:43.162 "method": "bdev_nvme_attach_controller" 00:23:43.162 } 00:23:43.162 EOF 00:23:43.162 )") 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.162 { 00:23:43.162 "params": { 00:23:43.162 "name": "Nvme$subsystem", 00:23:43.162 "trtype": "$TEST_TRANSPORT", 00:23:43.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.162 "adrfam": "ipv4", 00:23:43.162 "trsvcid": "$NVMF_PORT", 00:23:43.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.162 "hdgst": ${hdgst:-false}, 00:23:43.162 "ddgst": ${ddgst:-false} 00:23:43.162 }, 00:23:43.162 "method": "bdev_nvme_attach_controller" 00:23:43.162 } 00:23:43.162 EOF 00:23:43.162 )") 00:23:43.162 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:43.422 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.422 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.422 { 00:23:43.422 "params": { 00:23:43.422 "name": "Nvme$subsystem", 00:23:43.422 "trtype": "$TEST_TRANSPORT", 00:23:43.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.422 "adrfam": "ipv4", 00:23:43.422 "trsvcid": "$NVMF_PORT", 00:23:43.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.422 "hdgst": ${hdgst:-false}, 00:23:43.422 "ddgst": ${ddgst:-false} 00:23:43.422 }, 00:23:43.422 "method": "bdev_nvme_attach_controller" 
00:23:43.422 } 00:23:43.422 EOF 00:23:43.422 )") 00:23:43.422 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:43.422 [2024-07-15 23:59:58.358286] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:23:43.422 [2024-07-15 23:59:58.358338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid542508 ] 00:23:43.422 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.422 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.422 { 00:23:43.422 "params": { 00:23:43.422 "name": "Nvme$subsystem", 00:23:43.422 "trtype": "$TEST_TRANSPORT", 00:23:43.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.422 "adrfam": "ipv4", 00:23:43.422 "trsvcid": "$NVMF_PORT", 00:23:43.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.422 "hdgst": ${hdgst:-false}, 00:23:43.422 "ddgst": ${ddgst:-false} 00:23:43.422 }, 00:23:43.422 "method": "bdev_nvme_attach_controller" 00:23:43.422 } 00:23:43.422 EOF 00:23:43.422 )") 00:23:43.422 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:43.422 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.422 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.422 { 00:23:43.422 "params": { 00:23:43.422 "name": "Nvme$subsystem", 00:23:43.422 "trtype": "$TEST_TRANSPORT", 00:23:43.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.422 "adrfam": "ipv4", 00:23:43.422 "trsvcid": "$NVMF_PORT", 00:23:43.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.422 "hdgst": ${hdgst:-false}, 00:23:43.422 "ddgst": ${ddgst:-false} 00:23:43.422 }, 00:23:43.422 "method": "bdev_nvme_attach_controller" 00:23:43.422 } 00:23:43.422 EOF 00:23:43.422 )") 00:23:43.422 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:43.422 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.422 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.422 { 00:23:43.422 "params": { 00:23:43.422 "name": "Nvme$subsystem", 00:23:43.422 "trtype": "$TEST_TRANSPORT", 00:23:43.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.422 "adrfam": "ipv4", 00:23:43.422 "trsvcid": "$NVMF_PORT", 00:23:43.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.422 "hdgst": ${hdgst:-false}, 00:23:43.422 "ddgst": ${ddgst:-false} 00:23:43.422 }, 00:23:43.422 "method": "bdev_nvme_attach_controller" 00:23:43.422 } 00:23:43.422 EOF 00:23:43.422 )") 00:23:43.422 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:43.422 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.422 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.422 { 00:23:43.422 "params": { 00:23:43.422 "name": "Nvme$subsystem", 00:23:43.422 "trtype": "$TEST_TRANSPORT", 
00:23:43.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.422 "adrfam": "ipv4", 00:23:43.422 "trsvcid": "$NVMF_PORT", 00:23:43.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.423 "hdgst": ${hdgst:-false}, 00:23:43.423 "ddgst": ${ddgst:-false} 00:23:43.423 }, 00:23:43.423 "method": "bdev_nvme_attach_controller" 00:23:43.423 } 00:23:43.423 EOF 00:23:43.423 )") 00:23:43.423 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:43.423 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:23:43.423 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:43.423 23:59:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:43.423 "params": { 00:23:43.423 "name": "Nvme1", 00:23:43.423 "trtype": "tcp", 00:23:43.423 "traddr": "10.0.0.2", 00:23:43.423 "adrfam": "ipv4", 00:23:43.423 "trsvcid": "4420", 00:23:43.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.423 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:43.423 "hdgst": false, 00:23:43.423 "ddgst": false 00:23:43.423 }, 00:23:43.423 "method": "bdev_nvme_attach_controller" 00:23:43.423 },{ 00:23:43.423 "params": { 00:23:43.423 "name": "Nvme2", 00:23:43.423 "trtype": "tcp", 00:23:43.423 "traddr": "10.0.0.2", 00:23:43.423 "adrfam": "ipv4", 00:23:43.423 "trsvcid": "4420", 00:23:43.423 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:43.423 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:43.423 "hdgst": false, 00:23:43.423 "ddgst": false 00:23:43.423 }, 00:23:43.423 "method": "bdev_nvme_attach_controller" 00:23:43.423 },{ 00:23:43.423 "params": { 00:23:43.423 "name": "Nvme3", 00:23:43.423 "trtype": "tcp", 00:23:43.423 "traddr": "10.0.0.2", 00:23:43.423 "adrfam": "ipv4", 00:23:43.423 "trsvcid": "4420", 00:23:43.423 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:43.423 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:43.423 "hdgst": false, 00:23:43.423 "ddgst": false 00:23:43.423 }, 00:23:43.423 "method": "bdev_nvme_attach_controller" 00:23:43.423 },{ 00:23:43.423 "params": { 00:23:43.423 "name": "Nvme4", 00:23:43.423 "trtype": "tcp", 00:23:43.423 "traddr": "10.0.0.2", 00:23:43.423 "adrfam": "ipv4", 00:23:43.423 "trsvcid": "4420", 00:23:43.423 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:43.423 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:43.423 "hdgst": false, 00:23:43.423 "ddgst": false 00:23:43.423 }, 00:23:43.423 "method": "bdev_nvme_attach_controller" 00:23:43.423 },{ 00:23:43.423 "params": { 00:23:43.423 "name": "Nvme5", 00:23:43.423 "trtype": "tcp", 00:23:43.423 "traddr": "10.0.0.2", 00:23:43.423 "adrfam": "ipv4", 00:23:43.423 "trsvcid": "4420", 00:23:43.423 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:43.423 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:43.423 "hdgst": false, 00:23:43.423 "ddgst": false 00:23:43.423 }, 00:23:43.423 "method": "bdev_nvme_attach_controller" 00:23:43.423 },{ 00:23:43.423 "params": { 00:23:43.423 "name": "Nvme6", 00:23:43.423 "trtype": "tcp", 00:23:43.423 "traddr": "10.0.0.2", 00:23:43.423 "adrfam": "ipv4", 00:23:43.423 "trsvcid": "4420", 00:23:43.423 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:43.423 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:43.423 "hdgst": false, 00:23:43.423 "ddgst": false 00:23:43.423 }, 00:23:43.423 "method": "bdev_nvme_attach_controller" 00:23:43.423 },{ 00:23:43.423 "params": { 00:23:43.423 "name": "Nvme7", 00:23:43.423 "trtype": "tcp", 00:23:43.423 "traddr": "10.0.0.2", 00:23:43.423 
"adrfam": "ipv4", 00:23:43.423 "trsvcid": "4420", 00:23:43.423 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:43.423 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:43.423 "hdgst": false, 00:23:43.423 "ddgst": false 00:23:43.423 }, 00:23:43.423 "method": "bdev_nvme_attach_controller" 00:23:43.423 },{ 00:23:43.423 "params": { 00:23:43.423 "name": "Nvme8", 00:23:43.423 "trtype": "tcp", 00:23:43.423 "traddr": "10.0.0.2", 00:23:43.423 "adrfam": "ipv4", 00:23:43.423 "trsvcid": "4420", 00:23:43.423 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:43.423 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:43.423 "hdgst": false, 00:23:43.423 "ddgst": false 00:23:43.423 }, 00:23:43.423 "method": "bdev_nvme_attach_controller" 00:23:43.423 },{ 00:23:43.423 "params": { 00:23:43.423 "name": "Nvme9", 00:23:43.423 "trtype": "tcp", 00:23:43.423 "traddr": "10.0.0.2", 00:23:43.423 "adrfam": "ipv4", 00:23:43.423 "trsvcid": "4420", 00:23:43.423 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:43.423 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:43.423 "hdgst": false, 00:23:43.423 "ddgst": false 00:23:43.423 }, 00:23:43.423 "method": "bdev_nvme_attach_controller" 00:23:43.423 },{ 00:23:43.423 "params": { 00:23:43.423 "name": "Nvme10", 00:23:43.423 "trtype": "tcp", 00:23:43.423 "traddr": "10.0.0.2", 00:23:43.423 "adrfam": "ipv4", 00:23:43.423 "trsvcid": "4420", 00:23:43.423 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:43.423 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:43.423 "hdgst": false, 00:23:43.423 "ddgst": false 00:23:43.423 }, 00:23:43.423 "method": "bdev_nvme_attach_controller" 00:23:43.423 }' 00:23:43.423 [2024-07-15 23:59:58.425192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.423 [2024-07-15 23:59:58.490374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.804 Running I/O for 10 seconds... 
00:23:44.804 23:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:23:44.804 23:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # return 0 00:23:44.804 23:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:44.804 23:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:44.804 23:59:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:45.064 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:45.064 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:45.064 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:45.064 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:45.064 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:45.064 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:45.064 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:45.064 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:45.064 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:45.064 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:45.064 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:45.064 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:45.064 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:45.064 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:45.064 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:45.064 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:45.064 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:45.323 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:45.323 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:45.323 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:45.323 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:45.323 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:45.323 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:45.323 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:45.323 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=67 00:23:45.323 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:45.323 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:45.582 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:45.582 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:45.582 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:45.582 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:45.582 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:45.582 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:45.582 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:45.858 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:45.858 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:45.858 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:45.858 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:45.858 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:45.858 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 542133 00:23:45.858 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@942 -- # '[' -z 542133 ']' 00:23:45.858 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # kill -0 542133 00:23:45.858 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@947 -- # uname 00:23:45.858 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:23:45.858 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 542133 00:23:45.858 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:23:45.858 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:23:45.858 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # echo 'killing process with pid 542133' 00:23:45.858 killing process with pid 542133 00:23:45.858 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@961 -- # kill 542133 00:23:45.858 00:00:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # wait 542133 00:23:45.858 [2024-07-16 00:00:00.835961] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.858 [2024-07-16 00:00:00.836015] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.858 [2024-07-16 00:00:00.836021] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.858 [2024-07-16 00:00:00.836026] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.858 [2024-07-16 00:00:00.836031] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836035] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836040] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836045] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836050] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836054] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836058] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836063] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836067] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836072] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836076] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836081] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836085] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836090] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836094] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836099] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836103] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836108] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836113] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836118] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836123] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836127] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836132] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836136] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836142] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836146] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836151] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836156] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836160] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836165] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836169] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836174] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836178] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836183] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836187] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836191] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836196] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836201] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836205] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836210] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836214] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836219] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836223] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the 
state(5) to be set 00:23:45.859 [2024-07-16 00:00:00.836228] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x177e260 is same with the state(5) to be set
[the same tcp.c:1621 recv-state error repeats continuously for tqpair=0x177e260, 0x1780c60, 0x177e700, 0x177eba0, 0x177f520 and 0x177f9c0 between 00:00:00.836228 and 00:00:00.841284]
00:23:45.862 [2024-07-16 00:00:00.841896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.862 [2024-07-16 00:00:00.841932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.862 [2024-07-16 00:00:00.841949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.862 [2024-07-16 00:00:00.841957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.863 [2024-07-16 00:00:00.841966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.863 [2024-07-16 00:00:00.841974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.863 [2024-07-16 00:00:00.841988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.863 [2024-07-16
00:00:00.841995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.863 [2024-07-16 00:00:00.842005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.863 [2024-07-16 00:00:00.842012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.863 [2024-07-16 00:00:00.842021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.863 [2024-07-16 00:00:00.842029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.863 [2024-07-16 00:00:00.842038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.863 [2024-07-16 00:00:00.842045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.863 [2024-07-16 00:00:00.842055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.863 [2024-07-16 00:00:00.842062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.863 [2024-07-16 00:00:00.842071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.863 [2024-07-16 00:00:00.842078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.863 [2024-07-16 00:00:00.842087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.863 [2024-07-16 00:00:00.842094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.863 [2024-07-16 00:00:00.842104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.863 [2024-07-16 00:00:00.842110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.863 [2024-07-16 00:00:00.842120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.863 [2024-07-16 00:00:00.842127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.863 [2024-07-16 00:00:00.842136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.863 [2024-07-16 00:00:00.842143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.863 [2024-07-16 00:00:00.842152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.863 [2024-07-16 
00:00:00.842159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.863 [2024-07-16 00:00:00.842168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.863 [2024-07-16 00:00:00.842175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.863 [2024-07-16 00:00:00.842185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.863 [2024-07-16 00:00:00.842194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.863 [2024-07-16 00:00:00.842203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.863 [2024-07-16 00:00:00.842210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.863 [2024-07-16 00:00:00.842220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.863 [2024-07-16 00:00:00.842227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.863 [2024-07-16 00:00:00.842244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.863 [2024-07-16 00:00:00.842251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.863 [2024-07-16 00:00:00.842260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.863 [2024-07-16 00:00:00.842268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.863 [2024-07-16 00:00:00.842277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.863 [2024-07-16 00:00:00.842284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.863 [2024-07-16 00:00:00.842293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.863 [2024-07-16 00:00:00.842301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.863 [2024-07-16 00:00:00.842310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.863 [2024-07-16 00:00:00.842317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.863 [2024-07-16 00:00:00.842327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.863 [2024-07-16 
00:00:00.842334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.863 [2024-07-16 00:00:00.842343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.863 [2024-07-16 00:00:00.842350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.863 [2024-07-16 00:00:00.842360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.863 [2024-07-16 00:00:00.842367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.864 [2024-07-16 00:00:00.842376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.864 [2024-07-16 00:00:00.842383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.864 [2024-07-16 00:00:00.842392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.864 [2024-07-16 00:00:00.842399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.864 [2024-07-16 00:00:00.842411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.864 [2024-07-16 00:00:00.842406] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780300 is same with the state(5) to be set 00:23:45.864 [2024-07-16 00:00:00.842419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.864 [2024-07-16 00:00:00.842428] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780300 is same with the state(5) to be set 00:23:45.864 [2024-07-16 00:00:00.842429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.864 [2024-07-16 00:00:00.842434] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780300 is same with the state(5) to be set 00:23:45.864 [2024-07-16 00:00:00.842437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.864 [2024-07-16 00:00:00.842439] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780300 is same with the state(5) to be set 00:23:45.864 [2024-07-16 00:00:00.842445] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780300 is same with the state(5) to be set 00:23:45.864 [2024-07-16 00:00:00.842447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.864 [2024-07-16 00:00:00.842450] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1780300 is same with the state(5) to be set 00:23:45.864 [2024-07-16 00:00:00.842454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.864
[the tcp.c:1621 recv-state error for tqpair=0x1780300 above repeats through 00:00:00.842776, interleaved mid-line with the nvme_qpair notices that follow]
[2024-07-16 00:00:00.842464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-16 00:00:00.842471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-16 00:00:00.842483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-16 00:00:00.842493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-16 00:00:00.842503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-16 00:00:00.842513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-16 00:00:00.842523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-16 00:00:00.842535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-16 00:00:00.842546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-16 00:00:00.842554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-16 00:00:00.842566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-16 00:00:00.842575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-16 00:00:00.842585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-16 00:00:00.842595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-16 00:00:00.842604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-16 00:00:00.842616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-16 00:00:00.842626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-16 00:00:00.842633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-16 00:00:00.842644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-16 00:00:00.842652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-16 00:00:00.842662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-16 00:00:00.842671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-16 00:00:00.842680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-16 00:00:00.842688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-16 00:00:00.842699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-16 00:00:00.842709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-16 00:00:00.842719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-16 00:00:00.842726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-16 00:00:00.842738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-16 00:00:00.842747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-16 00:00:00.842757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-16 00:00:00.842764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-16 00:00:00.842774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-16 00:00:00.842782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-16 00:00:00.842792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-16 00:00:00.842799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-16 00:00:00.842811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-16 00:00:00.842818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-16 00:00:00.842827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-16 00:00:00.842834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-16 00:00:00.842844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-16 00:00:00.842851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-16 00:00:00.842860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-16 00:00:00.842867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-16 00:00:00.842876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-16 00:00:00.842883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-16 00:00:00.842892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.865 [2024-07-16
00:00:00.842899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.865 [2024-07-16 00:00:00.842909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.865 [2024-07-16 00:00:00.842916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.865 [2024-07-16 00:00:00.842925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.865 [2024-07-16 00:00:00.842932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.865 [2024-07-16 00:00:00.842941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.865 [2024-07-16 00:00:00.842948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.865 [2024-07-16 00:00:00.842957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.865 [2024-07-16 00:00:00.842964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.865 [2024-07-16 00:00:00.842973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.865 [2024-07-16 00:00:00.842980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.865 [2024-07-16 00:00:00.842989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.865 [2024-07-16 00:00:00.842997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.865 [2024-07-16 00:00:00.843007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.865 [2024-07-16 00:00:00.843016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.865 [2024-07-16 00:00:00.843025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.865 [2024-07-16 00:00:00.843032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.865 [2024-07-16 00:00:00.843041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.865 [2024-07-16 00:00:00.843048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.865 [2024-07-16 00:00:00.843077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:45.865 [2024-07-16 00:00:00.843123] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12a4d90 was disconnected and freed. reset controller. 00:23:45.865 [2024-07-16 00:00:00.843224] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.865 [2024-07-16 00:00:00.843244] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.865 [2024-07-16 00:00:00.843249] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.865 [2024-07-16 00:00:00.843254] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.865 [2024-07-16 00:00:00.843259] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.865 [2024-07-16 00:00:00.843263] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.865 [2024-07-16 00:00:00.843268] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.865 [2024-07-16 00:00:00.843273] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.865 [2024-07-16 00:00:00.843277] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843282] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843287] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843291] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843296] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843301] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843305] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843310] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843314] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843319] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843323] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843327] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843335] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the 
state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843339] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843344] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843349] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843353] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843358] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843362] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843367] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843371] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843376] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843381] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843385] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843390] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843395] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.866 [2024-07-16 00:00:00.843602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.843631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.843645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.843652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.843662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.843669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.843679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.843686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.843695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.843702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.843712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.843719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.843732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.843739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.843749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.843756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.843765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.843772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.843781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.843788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.843798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.843805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.843814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.843821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.843830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.843837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.843846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.843853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:45.866 [2024-07-16 00:00:00.843863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.843869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.843878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.843885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.843895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.843901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.843910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.843917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.843926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.843935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.843944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.843951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.843960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.843967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.843976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.843983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.843993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.843999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.844009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.844016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 
[2024-07-16 00:00:00.844025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.844032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.844041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.844048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.844057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.844064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.844073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.844080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.844089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.844096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.844106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.844113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.844122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.844129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.844143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.844151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.844160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.844167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.866 [2024-07-16 00:00:00.844177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.866 [2024-07-16 00:00:00.844184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844193] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844524] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.867 [2024-07-16 00:00:00.844682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844706] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:45.867 [2024-07-16 00:00:00.844744] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1401310 was disconnected and freed. reset controller. 00:23:45.867 [2024-07-16 00:00:00.844904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.867 [2024-07-16 00:00:00.844918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.867 [2024-07-16 00:00:00.844934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.867 [2024-07-16 00:00:00.844949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.844957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.867 [2024-07-16 00:00:00.844966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.845004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc9d0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.845064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.867 [2024-07-16 00:00:00.845097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.845146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.867 [2024-07-16 00:00:00.845194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.845260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.867 [2024-07-16 00:00:00.845298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.845347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.867 [2024-07-16 00:00:00.845396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.845444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475000 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.845518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:23:45.867 [2024-07-16 00:00:00.855761] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855781] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855787] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855793] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855799] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855804] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855808] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855813] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855817] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855822] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855827] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855831] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855835] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855840] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855844] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855852] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855856] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855861] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855865] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855870] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855874] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855878] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855883] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855887] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855892] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855897] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855901] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855906] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.855910] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17807a0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.862992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.863028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.867 [2024-07-16 00:00:00.863037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.863045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.867 [2024-07-16 00:00:00.863053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.863062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.867 [2024-07-16 00:00:00.863068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.863077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147dc10 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.863140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.867 [2024-07-16 00:00:00.863150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.863158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.867 [2024-07-16 00:00:00.863165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.863177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.867 [2024-07-16 00:00:00.863185] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.863192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.867 [2024-07-16 00:00:00.863199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.863206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e6970 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.863228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.867 [2024-07-16 00:00:00.863262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.863270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.867 [2024-07-16 00:00:00.863277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.863286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.867 [2024-07-16 00:00:00.863292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.863300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.867 [2024-07-16 00:00:00.863308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.863314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ecbc0 is same with the state(5) to be set 00:23:45.867 [2024-07-16 00:00:00.863338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.867 [2024-07-16 00:00:00.863346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.863354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.867 [2024-07-16 00:00:00.863362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.867 [2024-07-16 00:00:00.863370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.867 [2024-07-16 00:00:00.863377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.863385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.868 [2024-07-16 00:00:00.863392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.863399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1448cb0 is same with the state(5) to be set 00:23:45.868 [2024-07-16 00:00:00.863427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.868 [2024-07-16 00:00:00.863435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.863443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.868 [2024-07-16 00:00:00.863453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.863461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.868 [2024-07-16 00:00:00.863468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.863475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.868 [2024-07-16 00:00:00.863483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.863490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdad610 is same with the state(5) to be set 00:23:45.868 [2024-07-16 00:00:00.863514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.868 [2024-07-16 00:00:00.863522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.863530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.868 [2024-07-16 00:00:00.863537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.863545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.868 [2024-07-16 00:00:00.863552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.863560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.868 [2024-07-16 00:00:00.863567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.863574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475fd0 is same with the state(5) to be set 00:23:45.868 [2024-07-16 00:00:00.863595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.868 [2024-07-16 00:00:00.863603] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.863611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.868 [2024-07-16 00:00:00.863618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.863626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.868 [2024-07-16 00:00:00.863633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.863640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.868 [2024-07-16 00:00:00.863647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.863654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa5d0 is same with the state(5) to be set 00:23:45.868 [2024-07-16 00:00:00.866563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:45.868 [2024-07-16 00:00:00.866598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:45.868 [2024-07-16 00:00:00.866613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc9d0 (9): Bad file descriptor 00:23:45.868 [2024-07-16 00:00:00.866627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e6970 (9): Bad file descriptor 00:23:45.868 [2024-07-16 00:00:00.866653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1475000 (9): Bad file descriptor 00:23:45.868 [2024-07-16 00:00:00.866682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.868 [2024-07-16 00:00:00.866693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.866703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.868 [2024-07-16 00:00:00.866712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.866722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.868 [2024-07-16 00:00:00.866729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.866737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.868 [2024-07-16 00:00:00.866744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.866751] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145ee20 is same with the state(5) to be set 00:23:45.868 [2024-07-16 00:00:00.866769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147dc10 (9): Bad file descriptor 00:23:45.868 [2024-07-16 00:00:00.866787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ecbc0 (9): Bad file descriptor 00:23:45.868 [2024-07-16 00:00:00.866802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1448cb0 (9): Bad file descriptor 00:23:45.868 [2024-07-16 00:00:00.866814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdad610 (9): Bad file descriptor 00:23:45.868 [2024-07-16 00:00:00.866831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1475fd0 (9): Bad file descriptor 00:23:45.868 [2024-07-16 00:00:00.866846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12aa5d0 (9): Bad file descriptor 00:23:45.868 [2024-07-16 00:00:00.866968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.866980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.866992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:45.868 [2024-07-16 00:00:00.867096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 
[2024-07-16 00:00:00.867271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 
00:00:00.867436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 
00:00:00.867601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 
00:00:00.867764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.868 [2024-07-16 00:00:00.867911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.868 [2024-07-16 00:00:00.867918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 
00:00:00.867927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.867934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.867943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.867950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.867959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.867966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.867975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.867982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.867991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.867998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.868007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.868014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.868023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.868031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.868084] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13ff230 was disconnected and freed. reset controller. 
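The completions printed above all carry the status "ABORTED - SQ DELETION (00/08)": the in-flight WRITE commands on qid 1 are aborted because their submission queue is deleted while qpair 0x13ff230 is disconnected and freed for a controller reset. A minimal sketch of a completion callback that recognizes that status is below; it is not taken from this test, only assumes SPDK's public NVMe driver headers, and the io_ctx structure and the resubmit hook it mentions are hypothetical names used for illustration.

#include <stdbool.h>
#include "spdk/nvme.h"

struct io_ctx {
    int retries_left;   /* hypothetical per-I/O bookkeeping */
};

static bool
aborted_by_sq_deletion(const struct spdk_nvme_cpl *cpl)
{
    /* Matches the "(00/08)" in the completions above:
     * sct 0x00 = generic status, sc 0x08 = ABORTED - SQ DELETION. */
    return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
           cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
}

static void
write_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
    struct io_ctx *ctx = arg;

    if (spdk_nvme_cpl_is_error(cpl)) {
        if (aborted_by_sq_deletion(cpl) && ctx->retries_left > 0) {
            ctx->retries_left--;
            /* The qpair was deleted for a controller reset; park the I/O
             * and resubmit it after the reset completes (the resubmit
             * hook is a placeholder, not part of this test). */
            return;
        }
        /* Any other error status is surfaced to the caller as a failure. */
    }
    /* Success (or final failure): release ctx, update counters, etc. */
}

Such a callback would be passed as the spdk_nvme_cmd_cb argument of an I/O submission call; treating SQ-deletion aborts as retryable is what lets the test's I/O continue once the controller reset finishes.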
00:23:45.869 [2024-07-16 00:00:00.870491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:45.869 [2024-07-16 00:00:00.870950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.869 [2024-07-16 00:00:00.870969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e6970 with addr=10.0.0.2, port=4420 00:23:45.869 [2024-07-16 00:00:00.870977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e6970 is same with the state(5) to be set 00:23:45.869 [2024-07-16 00:00:00.871500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.869 [2024-07-16 00:00:00.871538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc9d0 with addr=10.0.0.2, port=4420 00:23:45.869 [2024-07-16 00:00:00.871550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc9d0 is same with the state(5) to be set 00:23:45.869 [2024-07-16 00:00:00.871735] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:45.869 [2024-07-16 00:00:00.871775] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:45.869 [2024-07-16 00:00:00.871821] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:45.869 [2024-07-16 00:00:00.871904] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:45.869 [2024-07-16 00:00:00.872208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.869 [2024-07-16 00:00:00.872225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147dc10 with addr=10.0.0.2, port=4420 00:23:45.869 [2024-07-16 00:00:00.872240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147dc10 is same with the state(5) to be set 00:23:45.869 [2024-07-16 00:00:00.872252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e6970 (9): Bad file descriptor 00:23:45.869 [2024-07-16 00:00:00.872262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc9d0 (9): Bad file descriptor 00:23:45.869 [2024-07-16 00:00:00.872311] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:45.869 [2024-07-16 00:00:00.872350] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:45.869 [2024-07-16 00:00:00.872647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.872660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.872675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.872682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.872692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.872699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.872709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.872715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.872725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.872732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.872741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.872748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.872758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.872765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.872775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.872787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.872796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.872803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.872813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.872820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.872829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.872836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.872846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.872853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.872862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.872869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:45.869 [2024-07-16 00:00:00.872878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.872886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.872896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.872903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.872912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.872920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.872929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.872936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.872946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.872953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.872962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.872970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.872979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.872986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 
00:00:00.873051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873217] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873390] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.869 [2024-07-16 00:00:00.873528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.869 [2024-07-16 00:00:00.873535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.873545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.873552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.873562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.873569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.873578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.873585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.873595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.873602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.873611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.873619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.873628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.873637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.873647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.873654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.873664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.873671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.873680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.873688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.873697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.873704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.873714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.873721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.873730] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.873738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.873746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a6220 is same with the state(5) to be set 00:23:45.870 [2024-07-16 00:00:00.873805] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12a6220 was disconnected and freed. reset controller. 00:23:45.870 [2024-07-16 00:00:00.873872] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:45.870 [2024-07-16 00:00:00.873900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147dc10 (9): Bad file descriptor 00:23:45.870 [2024-07-16 00:00:00.873910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:45.870 [2024-07-16 00:00:00.873917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:45.870 [2024-07-16 00:00:00.873925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:45.870 [2024-07-16 00:00:00.873939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:45.870 [2024-07-16 00:00:00.873946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:45.870 [2024-07-16 00:00:00.873952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:45.870 [2024-07-16 00:00:00.875207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.870 [2024-07-16 00:00:00.875220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.870 [2024-07-16 00:00:00.875228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:45.870 [2024-07-16 00:00:00.875256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:45.870 [2024-07-16 00:00:00.875264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:45.870 [2024-07-16 00:00:00.875276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:45.870 [2024-07-16 00:00:00.875333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
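The reconnect attempts interleaved above fail in posix_sock_create with errno = 111, which is ECONNREFUSED: nothing is accepting connections at 10.0.0.2 port 4420 at that moment, so spdk_nvme_ctrlr_reconnect_poll_async reports "controller reinitialization failed", the controller is marked as being in a failed state, and the bdev layer logs "Resetting controller failed." The standalone sketch below is plain POSIX sockets, not SPDK code; it only reproduces the same errno against the address and port taken from the log when no listener is present.

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = {0};
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        return 1;
    }
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                     /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener, this prints:
         * connect: Connection refused (errno 111) */
        printf("connect: %s (errno %d)\n", strerror(errno), errno);
    }
    close(fd);
    return 0;
}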
00:23:45.870 [2024-07-16 00:00:00.875823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.870 [2024-07-16 00:00:00.875859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12ecbc0 with addr=10.0.0.2, port=4420 00:23:45.870 [2024-07-16 00:00:00.875872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ecbc0 is same with the state(5) to be set 00:23:45.870 [2024-07-16 00:00:00.876190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ecbc0 (9): Bad file descriptor 00:23:45.870 [2024-07-16 00:00:00.876267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:45.870 [2024-07-16 00:00:00.876276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:45.870 [2024-07-16 00:00:00.876284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:45.870 [2024-07-16 00:00:00.876332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.870 [2024-07-16 00:00:00.876613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x145ee20 (9): Bad file descriptor 00:23:45.870 [2024-07-16 00:00:00.876729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.876741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.876756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.876764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.876774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.876781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.876791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.876798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.876807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.876814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.876824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.876832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.876842] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.876849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.876859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.876866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.876880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.876887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.876897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.876904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.876914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.876920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.876930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.876937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.876947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.876954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.876963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.876970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.876979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.876986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.876995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.877003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.877012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.877019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.877029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.877036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.877045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.877052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.877062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.877069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.877078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.870 [2024-07-16 00:00:00.877087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.870 [2024-07-16 00:00:00.877096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:45.871 [2024-07-16 00:00:00.877524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 
00:00:00.877691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.871 [2024-07-16 00:00:00.877726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.871 [2024-07-16 00:00:00.877736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.877743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.877753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.877760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.877769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.877776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.877785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.877792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.877801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.877809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.877817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362420 is same with the state(5) to be set 00:23:45.872 [2024-07-16 00:00:00.879096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879329] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.872 [2024-07-16 00:00:00.879739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.872 [2024-07-16 00:00:00.879748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.879755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.879765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.879772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.879781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.879788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.879798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.879805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.879815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.879822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.879831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.879838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.879848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.879856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.879866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.879873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.879882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.879890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.879899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.879906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.879916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.879923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.879933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.879941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.879950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.879957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.879966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.879974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.879983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.879990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.879999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:45.873 [2024-07-16 00:00:00.880006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.880016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.880023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.880032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.880040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.880049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.880056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.880067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.880074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.880084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.880091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.880100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.880108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.880117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.880124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.880133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.880141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.880150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.880158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.880166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 
00:00:00.880174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.880183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.880190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.880198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fde60 is same with the state(5) to be set 00:23:45.873 [2024-07-16 00:00:00.881462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.881474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.881487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.881497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.881508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.881517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.881528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.881537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.881553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.881562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.881572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.881579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.881589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.881596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.881606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.881613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.881622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.881630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.881639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.881647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.881656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.873 [2024-07-16 00:00:00.881663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.873 [2024-07-16 00:00:00.881673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.881680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.881690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.881697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.881706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.881713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.881723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.881730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.881740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.881747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.881757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.881765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.881775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.881782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.881791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.881798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.881808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.881815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.881825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.881832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.881842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.881849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.881858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.881866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.881875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.881883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.881892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.881899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.881908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.881915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.881925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.881932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.881942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.881949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.881958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.881965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.881976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.881983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.881993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.882000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.882009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.882017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.882026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.882033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.882042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.882050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.882059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.882066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.882075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.882083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.882092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.882099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.882109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.882117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.882126] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.882133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.882142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.882149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.882158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.882166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.882175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.882182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.882193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.882200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.882210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.882217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.882227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.882241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.874 [2024-07-16 00:00:00.882251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.874 [2024-07-16 00:00:00.882258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.882268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.882275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.882284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.882292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.882302] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.882310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.882319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.882327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.882336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.882343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.882352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.882360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.882369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.882376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.882385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.882392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.882402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.882411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.882420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.882428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.882438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.882445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.882455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.882462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.882471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.882478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.882488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.882495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.882505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.882512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.882522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.882529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.882538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.882545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.882555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.882562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.882570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a6a40 is same with the state(5) to be set 00:23:45.875 [2024-07-16 00:00:00.883850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.883863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.883878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.883887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.883898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.883910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.883922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.883931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.883941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.883948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.883957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.883964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.883974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.883981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.883991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.883998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.884008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.884015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.884024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.884032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.884041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.884049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.884058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.884066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.884075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.884082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.884092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.884099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.884109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.884116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.884127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.884134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.884143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.884151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.884160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.884167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.884177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.884184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.884194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.884201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.884210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.884218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.884227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.884239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.884249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.884256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.884265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.875 [2024-07-16 00:00:00.884273] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.875 [2024-07-16 00:00:00.884282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884442] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.884948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.884956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14027c0 is same with the state(5) to be set 00:23:45.876 [2024-07-16 00:00:00.886228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.886253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.886267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.876 [2024-07-16 00:00:00.886276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.876 [2024-07-16 00:00:00.886287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.886984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.886991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.877 [2024-07-16 00:00:00.887000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.877 [2024-07-16 00:00:00.887007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.878 [2024-07-16 00:00:00.887017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.878 [2024-07-16 00:00:00.887024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.878 [2024-07-16 00:00:00.887034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.878 [2024-07-16 00:00:00.887041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.878 [2024-07-16 00:00:00.887050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.878 [2024-07-16 00:00:00.887057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.878 [2024-07-16 00:00:00.887067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.878 [2024-07-16 00:00:00.887074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.878 [2024-07-16 00:00:00.887083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:45.878 [2024-07-16 00:00:00.887091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.878 [2024-07-16 00:00:00.887101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.878 [2024-07-16 00:00:00.887110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.878 [2024-07-16 00:00:00.887119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.878 [2024-07-16 00:00:00.887127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.878 [2024-07-16 00:00:00.887136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.878 [2024-07-16 00:00:00.887143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.878 [2024-07-16 00:00:00.887153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.878 [2024-07-16 00:00:00.887160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.878 [2024-07-16 00:00:00.887169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.878 [2024-07-16 00:00:00.887176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.878 [2024-07-16 00:00:00.887186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.878 [2024-07-16 00:00:00.887193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.878 [2024-07-16 00:00:00.887203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.878 [2024-07-16 00:00:00.887210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.878 [2024-07-16 00:00:00.887219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.878 [2024-07-16 00:00:00.887226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.878 [2024-07-16 00:00:00.887240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.878 [2024-07-16 00:00:00.887248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.878 [2024-07-16 00:00:00.887257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.878 [2024-07-16 
00:00:00.887264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.878 [2024-07-16 00:00:00.887274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.878 [2024-07-16 00:00:00.887281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.878 [2024-07-16 00:00:00.887290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.878 [2024-07-16 00:00:00.887297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.878 [2024-07-16 00:00:00.887306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.878 [2024-07-16 00:00:00.887313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.878 [2024-07-16 00:00:00.887326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.878 [2024-07-16 00:00:00.887334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.878 [2024-07-16 00:00:00.887342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1404f70 is same with the state(5) to be set 00:23:45.878 [2024-07-16 00:00:00.888829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.878 [2024-07-16 00:00:00.888853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:45.878 [2024-07-16 00:00:00.888863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:45.878 [2024-07-16 00:00:00.888873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:45.878 [2024-07-16 00:00:00.888953] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:45.878 [2024-07-16 00:00:00.889034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:45.878 [2024-07-16 00:00:00.889579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.878 [2024-07-16 00:00:00.889618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12aa5d0 with addr=10.0.0.2, port=4420 00:23:45.878 [2024-07-16 00:00:00.889629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12aa5d0 is same with the state(5) to be set 00:23:45.878 [2024-07-16 00:00:00.889878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.878 [2024-07-16 00:00:00.889890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1475fd0 with addr=10.0.0.2, port=4420 00:23:45.878 [2024-07-16 00:00:00.889898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475fd0 is same with the state(5) to be set 00:23:45.878 [2024-07-16 00:00:00.890457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.878 [2024-07-16 00:00:00.890495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdad610 with addr=10.0.0.2, port=4420 00:23:45.878 [2024-07-16 00:00:00.890507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdad610 is same with the state(5) to be set 00:23:45.878 [2024-07-16 00:00:00.890743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.878 [2024-07-16 00:00:00.890754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1448cb0 with addr=10.0.0.2, port=4420 00:23:45.878 [2024-07-16 00:00:00.890762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1448cb0 is same with the state(5) to be set 00:23:45.878 [2024-07-16 00:00:00.891845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.878 [2024-07-16 00:00:00.891860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.878 [2024-07-16 00:00:00.891875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.878 [2024-07-16 00:00:00.891883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.878 [2024-07-16 00:00:00.891893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.878 [2024-07-16 00:00:00.891900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.878 [2024-07-16 00:00:00.891910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.878 [2024-07-16 00:00:00.891917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.878 [2024-07-16 00:00:00.891931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.878 
[2024-07-16 00:00:00.891938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion pair repeats for every remaining queued I/O between 00:00:00.891948 and 00:00:00.892936 (elapsed prefixes 00:23:45.878-00:23:45.880): READ sqid:1 nsid:1, cid:5 through cid:63, lba:17024 through lba:24448 in steps of 128 blocks (len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:45.880 [2024-07-16 00:00:00.892944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1403a70 is same with the state(5) to be set
00:23:45.880 [2024-07-16 00:00:00.894708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*:
[nqn.2016-06.io.spdk:cnode7] resetting controller
00:23:45.880 [2024-07-16 00:00:00.894733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:23:45.880 [2024-07-16 00:00:00.894742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:23:45.880 [2024-07-16 00:00:00.894751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:23:45.880 task offset: 27392 on job bdev=Nvme4n1 fails
00:23:45.880
00:23:45.880 Latency(us)
00:23:45.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:45.880 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:45.880 Job: Nvme1n1 ended in about 0.96 seconds with error
00:23:45.880 Verification LBA range: start 0x0 length 0x400
00:23:45.880 Nvme1n1 : 0.96 133.99 8.37 66.99 0.00 314966.76 20097.71 279620.27
00:23:45.880 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:45.880 Job: Nvme2n1 ended in about 0.96 seconds with error
00:23:45.880 Verification LBA range: start 0x0 length 0x400
00:23:45.880 Nvme2n1 : 0.96 133.65 8.35 66.83 0.00 309359.22 22828.37 251658.24
00:23:45.880 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:45.880 Job: Nvme3n1 ended in about 0.95 seconds with error
00:23:45.880 Verification LBA range: start 0x0 length 0x400
00:23:45.880 Nvme3n1 : 0.95 202.97 12.69 67.66 0.00 224106.67 21845.33 255153.49
00:23:45.880 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:45.880 Job: Nvme4n1 ended in about 0.94 seconds with error
00:23:45.880 Verification LBA range: start 0x0 length 0x400
00:23:45.880 Nvme4n1 : 0.94 203.94 12.75 67.98 0.00 218133.33 22282.24 270882.13
00:23:45.880 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:45.880 Job: Nvme5n1 ended in about 0.95 seconds with error
00:23:45.880 Verification LBA range: start 0x0 length 0x400
00:23:45.880 Nvme5n1 : 0.95 201.79 12.61 67.26 0.00 215700.32 4259.84 221948.59
00:23:45.880 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:45.880 Job: Nvme6n1 ended in about 0.96 seconds with error
00:23:45.880 Verification LBA range: start 0x0 length 0x400
00:23:45.880 Nvme6n1 : 0.96 133.32 8.33 66.66 0.00 284070.68 20643.84 286610.77
00:23:45.880 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:45.880 Job: Nvme7n1 ended in about 0.94 seconds with error
00:23:45.880 Verification LBA range: start 0x0 length 0x400
00:23:45.880 Nvme7n1 : 0.94 203.66 12.73 67.89 0.00 203768.11 21954.56 249910.61
00:23:45.880 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:45.880 Job: Nvme8n1 ended in about 0.96 seconds with error
00:23:45.880 Verification LBA range: start 0x0 length 0x400
00:23:45.880 Nvme8n1 : 0.96 199.49 12.47 66.50 0.00 203949.23 17039.36 255153.49
00:23:45.880 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:45.880 Job: Nvme9n1 ended in about 0.97 seconds with error
00:23:45.880 Verification LBA range: start 0x0 length 0x400
00:23:45.880 Nvme9n1 : 0.97 131.90 8.24 65.95 0.00 268205.51 18568.53 249910.61
00:23:45.880 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:45.880 Job: Nvme10n1 ended in about 0.96 seconds with error
00:23:45.880 Verification LBA range: start 0x0 length 0x400
00:23:45.880 Nvme10n1 : 0.96 132.67 8.29 66.33
0.00 260074.67 15510.19 274377.39 00:23:45.880 =================================================================================================================== 00:23:45.880 Total : 1677.38 104.84 670.05 0.00 244933.17 4259.84 286610.77 00:23:45.880 [2024-07-16 00:00:00.920091] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:45.880 [2024-07-16 00:00:00.920141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:23:45.880 [2024-07-16 00:00:00.920516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.880 [2024-07-16 00:00:00.920536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1475000 with addr=10.0.0.2, port=4420 00:23:45.880 [2024-07-16 00:00:00.920547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1475000 is same with the state(5) to be set 00:23:45.880 [2024-07-16 00:00:00.920561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12aa5d0 (9): Bad file descriptor 00:23:45.880 [2024-07-16 00:00:00.920588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1475fd0 (9): Bad file descriptor 00:23:45.880 [2024-07-16 00:00:00.920598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdad610 (9): Bad file descriptor 00:23:45.880 [2024-07-16 00:00:00.920607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1448cb0 (9): Bad file descriptor 00:23:45.880 [2024-07-16 00:00:00.920822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.880 [2024-07-16 00:00:00.920836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12cc9d0 with addr=10.0.0.2, port=4420 00:23:45.880 [2024-07-16 00:00:00.920843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cc9d0 is same with the state(5) to be set 00:23:45.880 [2024-07-16 00:00:00.921011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.880 [2024-07-16 00:00:00.921021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e6970 with addr=10.0.0.2, port=4420 00:23:45.880 [2024-07-16 00:00:00.921028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e6970 is same with the state(5) to be set 00:23:45.880 [2024-07-16 00:00:00.921393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.880 [2024-07-16 00:00:00.921404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147dc10 with addr=10.0.0.2, port=4420 00:23:45.880 [2024-07-16 00:00:00.921412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147dc10 is same with the state(5) to be set 00:23:45.880 [2024-07-16 00:00:00.921757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.880 [2024-07-16 00:00:00.921767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12ecbc0 with addr=10.0.0.2, port=4420 00:23:45.880 [2024-07-16 00:00:00.921774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ecbc0 is same with the state(5) to be set 00:23:45.880 [2024-07-16 00:00:00.922165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.880 [2024-07-16 00:00:00.922174] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x145ee20 with addr=10.0.0.2, port=4420 00:23:45.880 [2024-07-16 00:00:00.922181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145ee20 is same with the state(5) to be set 00:23:45.880 [2024-07-16 00:00:00.922190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1475000 (9): Bad file descriptor 00:23:45.880 [2024-07-16 00:00:00.922199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.880 [2024-07-16 00:00:00.922206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.880 [2024-07-16 00:00:00.922214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.880 [2024-07-16 00:00:00.922228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:45.880 [2024-07-16 00:00:00.922240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:45.880 [2024-07-16 00:00:00.922247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:45.880 [2024-07-16 00:00:00.922257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:45.880 [2024-07-16 00:00:00.922263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:45.880 [2024-07-16 00:00:00.922269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:45.880 [2024-07-16 00:00:00.922279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:45.880 [2024-07-16 00:00:00.922286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:45.881 [2024-07-16 00:00:00.922296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:45.881 [2024-07-16 00:00:00.922323] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:45.881 [2024-07-16 00:00:00.922334] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:45.881 [2024-07-16 00:00:00.922345] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:45.881 [2024-07-16 00:00:00.922356] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:45.881 [2024-07-16 00:00:00.922366] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:45.881 [2024-07-16 00:00:00.922700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.881 [2024-07-16 00:00:00.922710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.881 [2024-07-16 00:00:00.922716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.881 [2024-07-16 00:00:00.922722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
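Note: the repeated "connect() failed, errno = 111" / "sock connection error of tqpair=... with addr=10.0.0.2, port=4420" entries above are the host side failing to re-establish its TCP queue pairs after the target stopped listening; errno 111 is ECONNREFUSED. As a hedged aside (this is not a step in shutdown.sh), the same condition can be spot-checked from a shell:

# Hypothetical spot check, not part of the test scripts: errno 111 is ECONNREFUSED,
# i.e. nothing is accepting connections on the target listener any more.
if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "10.0.0.2:4420 refused or unreachable - consistent with the errno 111 entries above"
fi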
00:23:45.881 [2024-07-16 00:00:00.922730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc9d0 (9): Bad file descriptor 00:23:45.881 [2024-07-16 00:00:00.922739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e6970 (9): Bad file descriptor 00:23:45.881 [2024-07-16 00:00:00.922748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147dc10 (9): Bad file descriptor 00:23:45.881 [2024-07-16 00:00:00.922756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ecbc0 (9): Bad file descriptor 00:23:45.881 [2024-07-16 00:00:00.922765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x145ee20 (9): Bad file descriptor 00:23:45.881 [2024-07-16 00:00:00.922773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:45.881 [2024-07-16 00:00:00.922780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:45.881 [2024-07-16 00:00:00.922786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:45.881 [2024-07-16 00:00:00.923058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.881 [2024-07-16 00:00:00.923068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:45.881 [2024-07-16 00:00:00.923074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:45.881 [2024-07-16 00:00:00.923081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:45.881 [2024-07-16 00:00:00.923090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:45.881 [2024-07-16 00:00:00.923096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:45.881 [2024-07-16 00:00:00.923103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:45.881 [2024-07-16 00:00:00.923112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:45.881 [2024-07-16 00:00:00.923118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:45.881 [2024-07-16 00:00:00.923124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:45.881 [2024-07-16 00:00:00.923134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:45.881 [2024-07-16 00:00:00.923140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:45.881 [2024-07-16 00:00:00.923149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
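By this point every controller still being retried (cnode3, cnode4, cnode5, cnode7 and cnode10) has been marked failed and its reset abandoned. As an illustration only, not a step in shutdown.sh, the controllers the perf application still holds could be listed through its RPC socket; the socket path here is an assumption based on the -r /var/tmp/bdevperf.sock option this log shows bdevperf being started with elsewhere:

# Hedged sketch: list the NVMe-oF controllers currently attached inside the perf app.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers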
00:23:45.881 [2024-07-16 00:00:00.923158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:45.881 [2024-07-16 00:00:00.923165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:45.881 [2024-07-16 00:00:00.923171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:45.881 [2024-07-16 00:00:00.923205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.881 [2024-07-16 00:00:00.923212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.881 [2024-07-16 00:00:00.923219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.881 [2024-07-16 00:00:00.923224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.881 [2024-07-16 00:00:00.923236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.143 00:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:46.143 00:00:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 542508 00:23:47.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (542508) - No such process 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:47.079 rmmod nvme_tcp 00:23:47.079 rmmod nvme_fabrics 00:23:47.079 rmmod nvme_keyring 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:47.079 00:00:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.668 00:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:49.668 00:23:49.668 real 0m7.705s 00:23:49.668 user 0m18.512s 00:23:49.668 sys 0m1.263s 00:23:49.668 00:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:23:49.668 00:00:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:49.668 ************************************ 00:23:49.668 END TEST nvmf_shutdown_tc3 00:23:49.668 ************************************ 00:23:49.668 00:00:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1136 -- # return 0 00:23:49.668 00:00:04 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:49.668 00:23:49.668 real 0m33.318s 00:23:49.668 user 1m15.731s 00:23:49.668 sys 0m9.968s 00:23:49.668 00:00:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1118 -- # xtrace_disable 00:23:49.668 00:00:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:49.668 ************************************ 00:23:49.668 END TEST nvmf_shutdown 00:23:49.668 ************************************ 00:23:49.668 00:00:04 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:23:49.668 00:00:04 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:49.668 00:00:04 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:49.668 00:00:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:49.668 00:00:04 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:49.668 00:00:04 nvmf_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:49.668 00:00:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:49.668 00:00:04 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:49.668 00:00:04 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:49.668 00:00:04 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:23:49.668 00:00:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:23:49.668 00:00:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:49.668 ************************************ 00:23:49.668 START TEST nvmf_multicontroller 00:23:49.668 ************************************ 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:49.668 * Looking for test storage... 
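The nvmf_multicontroller run that starts here is launched through the suite's run_test wrapper with the transport passed on the command line. A minimal sketch of invoking the same test by hand, using the paths exactly as they appear in this trace (it assumes a built SPDK tree and a host with the cvl_0_* test interfaces available):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./test/nvmf/host/multicontroller.sh --transport=tcp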
00:23:49.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.668 00:00:04 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:49.669 00:00:04 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:49.669 00:00:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.807 00:00:12 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:57.807 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:57.807 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:57.807 Found net devices under 0000:31:00.0: cvl_0_0 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:57.807 Found net devices under 0000:31:00.1: cvl_0_1 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:57.807 00:00:12 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:57.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.699 ms 00:23:57.807 00:23:57.807 --- 10.0.0.2 ping statistics --- 00:23:57.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.807 rtt min/avg/max/mdev = 0.699/0.699/0.699/0.000 ms 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:57.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:57.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:23:57.807 00:23:57.807 --- 10.0.0.1 ping statistics --- 00:23:57.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.807 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=548476 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 548476 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@823 -- # '[' -z 548476 ']' 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@828 -- # local max_retries=100 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # xtrace_disable 00:23:57.807 00:00:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.807 [2024-07-16 00:00:12.865350] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:23:57.807 [2024-07-16 00:00:12.865414] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.807 [2024-07-16 00:00:12.964221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:58.066 [2024-07-16 00:00:13.058527] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.066 [2024-07-16 00:00:13.058588] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.066 [2024-07-16 00:00:13.058597] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.066 [2024-07-16 00:00:13.058603] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.066 [2024-07-16 00:00:13.058610] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:58.066 [2024-07-16 00:00:13.058744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:58.066 [2024-07-16 00:00:13.058908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.066 [2024-07-16 00:00:13.058908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # return 0 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.643 [2024-07-16 00:00:13.676302] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:58.643 Malloc0 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.643 [2024-07-16 00:00:13.750839] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.643 [2024-07-16 00:00:13.762779] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:58.643 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:58.644 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.644 Malloc1 00:23:58.644 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:58.644 00:00:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:58.644 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:58.644 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.644 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:58.644 00:00:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:58.644 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:58.644 00:00:13 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.644 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:58.644 00:00:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:58.644 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:58.644 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.644 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:58.644 00:00:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:58.644 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:58.644 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:58.903 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:58.903 00:00:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=548825 00:23:58.903 00:00:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:58.903 00:00:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:58.903 00:00:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 548825 /var/tmp/bdevperf.sock 00:23:58.903 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@823 -- # '[' -z 548825 ']' 00:23:58.903 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:58.903 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # local max_retries=100 00:23:58.903 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:58.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
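For reference, the target and initiator configuration that the multicontroller trace above drives can be reproduced by hand. The sequence below is a minimal sketch only: it assumes a running nvmf_tgt (the trace starts it inside the cvl_0_0_ns_spdk namespace, which is omitted here), that rpc_cmd in the trace wraps scripts/rpc.py on the default /var/tmp/spdk.sock, and that commands are issued from the SPDK source tree root. Every argument mirrors the traced host/multicontroller.sh calls; the duplicate bdev_nvme_attach_controller invocations further down in the log are expected to fail with JSON-RPC error -114.

# Target side: one subsystem per malloc bdev, each listening on 10.0.0.2:4420 and :4421.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421

# Initiator side: bdevperf is started idle (-z) with its own RPC socket, then the
# controller is attached and queried through that socket, as in host/multicontroller.sh.
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done   # crude stand-in for waitforlisten
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests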
00:23:58.903 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # xtrace_disable 00:23:58.903 00:00:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.473 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:23:59.473 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # return 0 00:23:59.473 00:00:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:59.473 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:59.473 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.733 NVMe0n1 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:59.733 1 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@642 -- # local es=0 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.733 request: 00:23:59.733 { 00:23:59.733 "name": "NVMe0", 00:23:59.733 "trtype": "tcp", 00:23:59.733 "traddr": "10.0.0.2", 00:23:59.733 "adrfam": "ipv4", 00:23:59.733 "trsvcid": "4420", 00:23:59.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.733 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:59.733 "hostaddr": "10.0.0.2", 00:23:59.733 "hostsvcid": "60000", 00:23:59.733 "prchk_reftag": false, 
00:23:59.733 "prchk_guard": false, 00:23:59.733 "hdgst": false, 00:23:59.733 "ddgst": false, 00:23:59.733 "method": "bdev_nvme_attach_controller", 00:23:59.733 "req_id": 1 00:23:59.733 } 00:23:59.733 Got JSON-RPC error response 00:23:59.733 response: 00:23:59.733 { 00:23:59.733 "code": -114, 00:23:59.733 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:59.733 } 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # es=1 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@642 -- # local es=0 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.733 request: 00:23:59.733 { 00:23:59.733 "name": "NVMe0", 00:23:59.733 "trtype": "tcp", 00:23:59.733 "traddr": "10.0.0.2", 00:23:59.733 "adrfam": "ipv4", 00:23:59.733 "trsvcid": "4420", 00:23:59.733 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:59.733 "hostaddr": "10.0.0.2", 00:23:59.733 "hostsvcid": "60000", 00:23:59.733 "prchk_reftag": false, 00:23:59.733 "prchk_guard": false, 00:23:59.733 "hdgst": false, 00:23:59.733 "ddgst": false, 00:23:59.733 "method": "bdev_nvme_attach_controller", 00:23:59.733 "req_id": 1 00:23:59.733 } 00:23:59.733 Got JSON-RPC error response 00:23:59.733 response: 00:23:59.733 { 00:23:59.733 "code": -114, 00:23:59.733 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:59.733 } 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # es=1 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@664 -- # [[ -n '' ]] 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@642 -- # local es=0 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.733 request: 00:23:59.733 { 00:23:59.733 "name": "NVMe0", 00:23:59.733 "trtype": "tcp", 00:23:59.733 "traddr": "10.0.0.2", 00:23:59.733 "adrfam": "ipv4", 00:23:59.733 "trsvcid": "4420", 00:23:59.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.733 "hostaddr": "10.0.0.2", 00:23:59.733 "hostsvcid": "60000", 00:23:59.733 "prchk_reftag": false, 00:23:59.733 "prchk_guard": false, 00:23:59.733 "hdgst": false, 00:23:59.733 "ddgst": false, 00:23:59.733 "multipath": "disable", 00:23:59.733 "method": "bdev_nvme_attach_controller", 00:23:59.733 "req_id": 1 00:23:59.733 } 00:23:59.733 Got JSON-RPC error response 00:23:59.733 response: 00:23:59.733 { 00:23:59.733 "code": -114, 00:23:59.733 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:59.733 } 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # es=1 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@642 -- # local es=0 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.733 request: 00:23:59.733 { 00:23:59.733 "name": "NVMe0", 00:23:59.733 "trtype": "tcp", 00:23:59.733 "traddr": "10.0.0.2", 00:23:59.733 "adrfam": "ipv4", 00:23:59.733 "trsvcid": "4420", 00:23:59.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.733 "hostaddr": "10.0.0.2", 00:23:59.733 "hostsvcid": "60000", 00:23:59.733 "prchk_reftag": false, 00:23:59.733 "prchk_guard": false, 00:23:59.733 "hdgst": false, 00:23:59.733 "ddgst": false, 00:23:59.733 "multipath": "failover", 00:23:59.733 "method": "bdev_nvme_attach_controller", 00:23:59.733 "req_id": 1 00:23:59.733 } 00:23:59.733 Got JSON-RPC error response 00:23:59.733 response: 00:23:59.733 { 00:23:59.733 "code": -114, 00:23:59.733 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:59.733 } 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # es=1 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.733 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:59.733 00:00:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.993 00:23:59.993 00:00:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:59.993 00:00:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:59.993 00:00:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:59.993 00:00:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:59.993 00:00:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:59.993 00:00:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:59.993 00:00:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:59.993 00:00:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:01.375 0 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 548825 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@942 -- # '[' -z 548825 ']' 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # kill -0 548825 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # uname 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 548825 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@960 -- # echo 'killing process with pid 548825' 00:24:01.375 killing process with pid 548825 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@961 -- # kill 548825 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # wait 548825 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode2 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1606 -- # read -r file 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # sort -u 00:24:01.375 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # cat 00:24:01.375 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:01.375 [2024-07-16 00:00:13.882904] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:24:01.375 [2024-07-16 00:00:13.882963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid548825 ] 00:24:01.375 [2024-07-16 00:00:13.949249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.375 [2024-07-16 00:00:14.013954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.375 [2024-07-16 00:00:15.146103] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 87d504d4-25c4-4c3c-b50b-86c870c91688 already exists 00:24:01.375 [2024-07-16 00:00:15.146134] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:87d504d4-25c4-4c3c-b50b-86c870c91688 alias for bdev NVMe1n1 00:24:01.375 [2024-07-16 00:00:15.146142] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:01.375 Running I/O for 1 seconds... 
00:24:01.375 00:24:01.375 Latency(us) 00:24:01.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.375 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:01.375 NVMe0n1 : 1.00 29394.63 114.82 0.00 0.00 4343.76 2075.31 7809.71 00:24:01.375 =================================================================================================================== 00:24:01.375 Total : 29394.63 114.82 0.00 0.00 4343.76 2075.31 7809.71 00:24:01.375 Received shutdown signal, test time was about 1.000000 seconds 00:24:01.375 00:24:01.375 Latency(us) 00:24:01.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.376 =================================================================================================================== 00:24:01.376 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:01.376 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:01.376 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:01.376 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1606 -- # read -r file 00:24:01.376 00:00:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:24:01.376 00:00:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:01.376 00:00:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:24:01.376 00:00:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:01.376 00:00:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:24:01.376 00:00:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:01.376 00:00:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:01.376 rmmod nvme_tcp 00:24:01.376 rmmod nvme_fabrics 00:24:01.636 rmmod nvme_keyring 00:24:01.636 00:00:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:01.636 00:00:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:24:01.636 00:00:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:24:01.636 00:00:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 548476 ']' 00:24:01.636 00:00:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 548476 00:24:01.636 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@942 -- # '[' -z 548476 ']' 00:24:01.636 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # kill -0 548476 00:24:01.636 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # uname 00:24:01.636 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:24:01.636 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 548476 00:24:01.636 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:24:01.636 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:24:01.636 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@960 -- # echo 'killing process with pid 548476' 00:24:01.636 killing process with pid 548476 00:24:01.636 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@961 -- # kill 548476 00:24:01.636 00:00:16 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # wait 548476 00:24:01.636 00:00:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:01.636 00:00:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:01.636 00:00:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:01.636 00:00:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:01.636 00:00:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:01.636 00:00:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.636 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:01.636 00:00:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.177 00:00:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:04.177 00:24:04.177 real 0m14.438s 00:24:04.177 user 0m16.386s 00:24:04.177 sys 0m6.938s 00:24:04.177 00:00:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1118 -- # xtrace_disable 00:24:04.177 00:00:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:04.177 ************************************ 00:24:04.177 END TEST nvmf_multicontroller 00:24:04.177 ************************************ 00:24:04.177 00:00:18 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:24:04.177 00:00:18 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:04.177 00:00:18 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:24:04.177 00:00:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:24:04.177 00:00:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:04.177 ************************************ 00:24:04.177 START TEST nvmf_aer 00:24:04.177 ************************************ 00:24:04.177 00:00:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:04.177 * Looking for test storage... 
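The nvmf_aer test that begins here exercises namespace-change Asynchronous Event Requests: a subsystem capped at two namespaces is created with one namespace, the aer tool is started, and adding a second namespace must produce the "Changed Namespace" notice. As a minimal sketch under the same assumptions as above (running nvmf_tgt, rpc_cmd wrapping scripts/rpc.py, paths relative to the SPDK tree), the flow traced further below amounts to:

# Subsystem limited to 2 namespaces (-m 2), initially exposing only Malloc0.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Start the AER listener; it touches the file once the namespace-change notice arrives.
test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &

# Adding a second namespace triggers the AER; the test then waits for the touch file.
scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2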
00:24:04.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:24:04.178 00:00:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:12.317 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 
0x159b)' 00:24:12.317 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:12.317 Found net devices under 0000:31:00.0: cvl_0_0 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:12.317 Found net devices under 0000:31:00.1: cvl_0_1 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:12.317 
00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:12.317 00:00:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:12.317 00:00:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:12.317 00:00:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:12.317 00:00:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:12.317 00:00:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:12.317 00:00:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:12.317 00:00:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:12.317 00:00:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:12.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:12.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:24:12.317 00:24:12.317 --- 10.0.0.2 ping statistics --- 00:24:12.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.317 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:24:12.317 00:00:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:12.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:12.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.363 ms 00:24:12.318 00:24:12.318 --- 10.0.0.1 ping statistics --- 00:24:12.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.318 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:24:12.318 00:00:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:12.318 00:00:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:24:12.318 00:00:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:12.318 00:00:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:12.318 00:00:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:12.318 00:00:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:12.318 00:00:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:12.318 00:00:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:12.318 00:00:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:12.318 00:00:27 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:12.318 00:00:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:12.318 00:00:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:12.318 00:00:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:12.318 00:00:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=553971 00:24:12.318 00:00:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 553971 00:24:12.318 00:00:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:12.318 00:00:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@823 -- # '[' -z 553971 ']' 00:24:12.318 00:00:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.318 00:00:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@828 -- # local max_retries=100 00:24:12.318 00:00:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.318 00:00:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # xtrace_disable 00:24:12.318 00:00:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:12.318 [2024-07-16 00:00:27.404514] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:24:12.318 [2024-07-16 00:00:27.404580] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.318 [2024-07-16 00:00:27.484373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:12.577 [2024-07-16 00:00:27.560046] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.577 [2024-07-16 00:00:27.560087] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:12.577 [2024-07-16 00:00:27.560095] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.577 [2024-07-16 00:00:27.560104] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.577 [2024-07-16 00:00:27.560109] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:12.577 [2024-07-16 00:00:27.560263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.577 [2024-07-16 00:00:27.560391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.577 [2024-07-16 00:00:27.560539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.577 [2024-07-16 00:00:27.560540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # return 0 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.146 [2024-07-16 00:00:28.232799] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.146 Malloc0 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.146 [2024-07-16 00:00:28.292243] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.146 [ 00:24:13.146 { 00:24:13.146 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:13.146 "subtype": "Discovery", 00:24:13.146 "listen_addresses": [], 00:24:13.146 "allow_any_host": true, 00:24:13.146 "hosts": [] 00:24:13.146 }, 00:24:13.146 { 00:24:13.146 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.146 "subtype": "NVMe", 00:24:13.146 "listen_addresses": [ 00:24:13.146 { 00:24:13.146 "trtype": "TCP", 00:24:13.146 "adrfam": "IPv4", 00:24:13.146 "traddr": "10.0.0.2", 00:24:13.146 "trsvcid": "4420" 00:24:13.146 } 00:24:13.146 ], 00:24:13.146 "allow_any_host": true, 00:24:13.146 "hosts": [], 00:24:13.146 "serial_number": "SPDK00000000000001", 00:24:13.146 "model_number": "SPDK bdev Controller", 00:24:13.146 "max_namespaces": 2, 00:24:13.146 "min_cntlid": 1, 00:24:13.146 "max_cntlid": 65519, 00:24:13.146 "namespaces": [ 00:24:13.146 { 00:24:13.146 "nsid": 1, 00:24:13.146 "bdev_name": "Malloc0", 00:24:13.146 "name": "Malloc0", 00:24:13.146 "nguid": "0802536EAE074760AB799A45196A77D8", 00:24:13.146 "uuid": "0802536e-ae07-4760-ab79-9a45196a77d8" 00:24:13.146 } 00:24:13.146 ] 00:24:13.146 } 00:24:13.146 ] 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=554234 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1259 -- # local i=0 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1260 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # '[' 0 -lt 200 ']' 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # i=1 00:24:13.146 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # sleep 0.1 00:24:13.406 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1260 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:13.406 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # '[' 1 -lt 200 ']' 00:24:13.406 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # i=2 00:24:13.406 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # sleep 0.1 00:24:13.406 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1260 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:13.406 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:13.406 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1270 -- # return 0 00:24:13.406 00:00:28 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:13.406 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:13.406 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.406 Malloc1 00:24:13.406 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:13.406 00:00:28 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:13.406 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:13.406 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.406 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:13.406 00:00:28 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:13.406 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:13.406 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.406 [ 00:24:13.406 { 00:24:13.406 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:13.406 "subtype": "Discovery", 00:24:13.406 "listen_addresses": [], 00:24:13.406 "allow_any_host": true, 00:24:13.406 "hosts": [] 00:24:13.406 }, 00:24:13.406 { 00:24:13.406 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.406 "subtype": "NVMe", 00:24:13.406 "listen_addresses": [ 00:24:13.406 { 00:24:13.406 "trtype": "TCP", 00:24:13.406 "adrfam": "IPv4", 00:24:13.406 "traddr": "10.0.0.2", 00:24:13.406 "trsvcid": "4420" 00:24:13.406 } 00:24:13.406 ], 00:24:13.406 "allow_any_host": true, 00:24:13.406 "hosts": [], 00:24:13.406 "serial_number": "SPDK00000000000001", 00:24:13.406 "model_number": "SPDK bdev Controller", 00:24:13.406 "max_namespaces": 2, 00:24:13.406 "min_cntlid": 1, 00:24:13.406 "max_cntlid": 65519, 00:24:13.406 "namespaces": [ 00:24:13.406 { 00:24:13.406 "nsid": 1, 00:24:13.406 "bdev_name": "Malloc0", 00:24:13.406 "name": "Malloc0", 00:24:13.406 "nguid": "0802536EAE074760AB799A45196A77D8", 00:24:13.406 "uuid": "0802536e-ae07-4760-ab79-9a45196a77d8" 00:24:13.406 }, 00:24:13.406 { 00:24:13.406 "nsid": 2, 00:24:13.406 "bdev_name": "Malloc1", 00:24:13.406 "name": "Malloc1", 00:24:13.406 "nguid": "489F76E779D6443F9981883D836CA2F6", 00:24:13.406 "uuid": "489f76e7-79d6-443f-9981-883d836ca2f6" 00:24:13.406 } 00:24:13.406 ] 00:24:13.406 Asynchronous Event Request test 00:24:13.406 Attaching to 10.0.0.2 00:24:13.406 Attached to 10.0.0.2 00:24:13.406 Registering asynchronous event callbacks... 00:24:13.406 Starting namespace attribute notice tests for all controllers... 00:24:13.406 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:13.406 aer_cb - Changed Namespace 00:24:13.406 Cleaning up... 
00:24:13.406 } 00:24:13.406 ] 00:24:13.406 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:13.406 00:00:28 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 554234 00:24:13.406 00:00:28 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:13.406 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:13.406 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.666 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:13.666 00:00:28 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:13.666 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:13.666 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.666 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:13.666 00:00:28 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:13.666 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:13.666 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:13.666 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:13.666 00:00:28 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:13.666 00:00:28 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:13.666 00:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:13.667 00:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:24:13.667 00:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:13.667 00:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:24:13.667 00:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:13.667 00:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:13.667 rmmod nvme_tcp 00:24:13.667 rmmod nvme_fabrics 00:24:13.667 rmmod nvme_keyring 00:24:13.667 00:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:13.667 00:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:24:13.667 00:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:24:13.667 00:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 553971 ']' 00:24:13.667 00:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 553971 00:24:13.667 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@942 -- # '[' -z 553971 ']' 00:24:13.667 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # kill -0 553971 00:24:13.667 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@947 -- # uname 00:24:13.667 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:24:13.667 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 553971 00:24:13.667 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:24:13.667 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:24:13.667 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@960 -- # echo 'killing process with pid 553971' 00:24:13.667 killing process with pid 553971 00:24:13.667 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@961 -- # kill 553971 00:24:13.667 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # wait 553971 00:24:13.927 00:00:28 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:13.927 00:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:13.927 00:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:13.927 00:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:13.927 00:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:13.927 00:00:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.928 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:13.928 00:00:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.840 00:00:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:15.840 00:24:15.840 real 0m12.020s 00:24:15.840 user 0m7.779s 00:24:15.840 sys 0m6.482s 00:24:15.840 00:00:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1118 -- # xtrace_disable 00:24:15.840 00:00:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:15.840 ************************************ 00:24:15.840 END TEST nvmf_aer 00:24:15.840 ************************************ 00:24:15.840 00:00:31 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:24:15.840 00:00:31 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:15.840 00:00:31 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:24:15.840 00:00:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:24:15.840 00:00:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:16.102 ************************************ 00:24:16.102 START TEST nvmf_async_init 00:24:16.102 ************************************ 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:16.102 * Looking for test storage... 
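For reference, the target-side configuration that the nvmf_aer test above drives through rpc_cmd reduces to the following RPC sequence. This is a minimal sketch: the scripts/rpc.py wrapper and its default socket path are assumptions on my part; the commands, sizes, subsystem names and addresses are the ones visible in the log itself.

    # target setup as exercised by host/aer.sh (sketch; rpc.py wrapper assumed)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Adding Malloc1 as a second namespace while the aer tool is connected is what produces the Namespace Attribute Changed notice recorded above (log page 4, aen_event_type 0x02), which the tool's aer_cb handles before cleaning up and touching /tmp/aer_touch_file.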
00:24:16.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=e66daed503b44934908810f3226277b0 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:16.102 00:00:31 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:24:16.102 00:00:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:24.238 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:24.238 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:24.238 Found net devices under 0000:31:00.0: cvl_0_0 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:24.238 Found net devices under 0000:31:00.1: cvl_0_1 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:24.238 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:24.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:24.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:24:24.500 00:24:24.500 --- 10.0.0.2 ping statistics --- 00:24:24.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.500 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:24.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:24.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.436 ms 00:24:24.500 00:24:24.500 --- 10.0.0.1 ping statistics --- 00:24:24.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.500 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=558940 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 558940 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@823 -- # '[' -z 558940 ']' 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@828 -- # local max_retries=100 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # xtrace_disable 00:24:24.500 00:00:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:24.500 [2024-07-16 00:00:39.606002] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
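The physical-NIC test topology that nvmftestinit builds just above, before launching nvmf_tgt inside the namespace, follows the pattern sketched below. Only the iproute2/iptables calls visible in the log are used; the cvl_0_0/cvl_0_1 interface names and the namespace name are taken from that output.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # cross-namespace reachability check

The nvmf_tgt process itself is then started with "ip netns exec cvl_0_0_ns_spdk", which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace that follows.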
00:24:24.500 [2024-07-16 00:00:39.606068] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.500 [2024-07-16 00:00:39.685670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.761 [2024-07-16 00:00:39.759368] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.762 [2024-07-16 00:00:39.759410] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.762 [2024-07-16 00:00:39.759419] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.762 [2024-07-16 00:00:39.759425] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.762 [2024-07-16 00:00:39.759431] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:24.762 [2024-07-16 00:00:39.759448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # return 0 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.333 [2024-07-16 00:00:40.414273] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.333 null0 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 
0 ]] 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e66daed503b44934908810f3226277b0 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.333 [2024-07-16 00:00:40.474522] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:25.333 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.592 nvme0n1 00:24:25.592 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:25.592 00:00:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:25.592 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:25.592 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.592 [ 00:24:25.592 { 00:24:25.592 "name": "nvme0n1", 00:24:25.592 "aliases": [ 00:24:25.592 "e66daed5-03b4-4934-9088-10f3226277b0" 00:24:25.592 ], 00:24:25.592 "product_name": "NVMe disk", 00:24:25.592 "block_size": 512, 00:24:25.592 "num_blocks": 2097152, 00:24:25.592 "uuid": "e66daed5-03b4-4934-9088-10f3226277b0", 00:24:25.592 "assigned_rate_limits": { 00:24:25.592 "rw_ios_per_sec": 0, 00:24:25.592 "rw_mbytes_per_sec": 0, 00:24:25.592 "r_mbytes_per_sec": 0, 00:24:25.592 "w_mbytes_per_sec": 0 00:24:25.592 }, 00:24:25.592 "claimed": false, 00:24:25.592 "zoned": false, 00:24:25.592 "supported_io_types": { 00:24:25.592 "read": true, 00:24:25.592 "write": true, 00:24:25.592 "unmap": false, 00:24:25.592 "flush": true, 00:24:25.592 "reset": true, 00:24:25.592 "nvme_admin": true, 00:24:25.592 "nvme_io": true, 00:24:25.592 "nvme_io_md": false, 00:24:25.592 "write_zeroes": true, 00:24:25.592 "zcopy": false, 00:24:25.592 "get_zone_info": false, 00:24:25.592 "zone_management": false, 00:24:25.592 "zone_append": false, 00:24:25.592 "compare": true, 00:24:25.592 "compare_and_write": true, 00:24:25.592 "abort": true, 00:24:25.592 "seek_hole": false, 00:24:25.592 "seek_data": false, 00:24:25.592 "copy": true, 00:24:25.592 "nvme_iov_md": false 00:24:25.592 }, 00:24:25.592 "memory_domains": [ 00:24:25.592 { 00:24:25.592 "dma_device_id": "system", 00:24:25.592 "dma_device_type": 1 00:24:25.592 } 00:24:25.592 ], 00:24:25.592 "driver_specific": { 00:24:25.592 "nvme": [ 00:24:25.592 { 00:24:25.592 "trid": { 00:24:25.592 "trtype": "TCP", 00:24:25.592 "adrfam": "IPv4", 00:24:25.592 "traddr": "10.0.0.2", 00:24:25.592 "trsvcid": "4420", 00:24:25.592 "subnqn": 
"nqn.2016-06.io.spdk:cnode0" 00:24:25.592 }, 00:24:25.592 "ctrlr_data": { 00:24:25.592 "cntlid": 1, 00:24:25.592 "vendor_id": "0x8086", 00:24:25.592 "model_number": "SPDK bdev Controller", 00:24:25.592 "serial_number": "00000000000000000000", 00:24:25.592 "firmware_revision": "24.09", 00:24:25.592 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:25.592 "oacs": { 00:24:25.592 "security": 0, 00:24:25.592 "format": 0, 00:24:25.592 "firmware": 0, 00:24:25.592 "ns_manage": 0 00:24:25.592 }, 00:24:25.592 "multi_ctrlr": true, 00:24:25.592 "ana_reporting": false 00:24:25.592 }, 00:24:25.592 "vs": { 00:24:25.592 "nvme_version": "1.3" 00:24:25.592 }, 00:24:25.592 "ns_data": { 00:24:25.592 "id": 1, 00:24:25.592 "can_share": true 00:24:25.592 } 00:24:25.592 } 00:24:25.592 ], 00:24:25.592 "mp_policy": "active_passive" 00:24:25.592 } 00:24:25.592 } 00:24:25.592 ] 00:24:25.592 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:25.592 00:00:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:25.592 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:25.592 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.592 [2024-07-16 00:00:40.747079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:25.592 [2024-07-16 00:00:40.747137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12879f0 (9): Bad file descriptor 00:24:25.852 [2024-07-16 00:00:40.889330] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:25.852 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:25.852 00:00:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:25.852 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:25.852 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.852 [ 00:24:25.852 { 00:24:25.852 "name": "nvme0n1", 00:24:25.852 "aliases": [ 00:24:25.852 "e66daed5-03b4-4934-9088-10f3226277b0" 00:24:25.852 ], 00:24:25.852 "product_name": "NVMe disk", 00:24:25.852 "block_size": 512, 00:24:25.852 "num_blocks": 2097152, 00:24:25.852 "uuid": "e66daed5-03b4-4934-9088-10f3226277b0", 00:24:25.852 "assigned_rate_limits": { 00:24:25.852 "rw_ios_per_sec": 0, 00:24:25.852 "rw_mbytes_per_sec": 0, 00:24:25.852 "r_mbytes_per_sec": 0, 00:24:25.852 "w_mbytes_per_sec": 0 00:24:25.852 }, 00:24:25.852 "claimed": false, 00:24:25.852 "zoned": false, 00:24:25.852 "supported_io_types": { 00:24:25.852 "read": true, 00:24:25.852 "write": true, 00:24:25.852 "unmap": false, 00:24:25.852 "flush": true, 00:24:25.852 "reset": true, 00:24:25.852 "nvme_admin": true, 00:24:25.852 "nvme_io": true, 00:24:25.852 "nvme_io_md": false, 00:24:25.852 "write_zeroes": true, 00:24:25.852 "zcopy": false, 00:24:25.852 "get_zone_info": false, 00:24:25.852 "zone_management": false, 00:24:25.852 "zone_append": false, 00:24:25.852 "compare": true, 00:24:25.852 "compare_and_write": true, 00:24:25.852 "abort": true, 00:24:25.852 "seek_hole": false, 00:24:25.852 "seek_data": false, 00:24:25.852 "copy": true, 00:24:25.853 "nvme_iov_md": false 00:24:25.853 }, 00:24:25.853 "memory_domains": [ 00:24:25.853 { 00:24:25.853 "dma_device_id": "system", 00:24:25.853 "dma_device_type": 1 00:24:25.853 } 00:24:25.853 ], 00:24:25.853 
"driver_specific": { 00:24:25.853 "nvme": [ 00:24:25.853 { 00:24:25.853 "trid": { 00:24:25.853 "trtype": "TCP", 00:24:25.853 "adrfam": "IPv4", 00:24:25.853 "traddr": "10.0.0.2", 00:24:25.853 "trsvcid": "4420", 00:24:25.853 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:25.853 }, 00:24:25.853 "ctrlr_data": { 00:24:25.853 "cntlid": 2, 00:24:25.853 "vendor_id": "0x8086", 00:24:25.853 "model_number": "SPDK bdev Controller", 00:24:25.853 "serial_number": "00000000000000000000", 00:24:25.853 "firmware_revision": "24.09", 00:24:25.853 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:25.853 "oacs": { 00:24:25.853 "security": 0, 00:24:25.853 "format": 0, 00:24:25.853 "firmware": 0, 00:24:25.853 "ns_manage": 0 00:24:25.853 }, 00:24:25.853 "multi_ctrlr": true, 00:24:25.853 "ana_reporting": false 00:24:25.853 }, 00:24:25.853 "vs": { 00:24:25.853 "nvme_version": "1.3" 00:24:25.853 }, 00:24:25.853 "ns_data": { 00:24:25.853 "id": 1, 00:24:25.853 "can_share": true 00:24:25.853 } 00:24:25.853 } 00:24:25.853 ], 00:24:25.853 "mp_policy": "active_passive" 00:24:25.853 } 00:24:25.853 } 00:24:25.853 ] 00:24:25.853 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:25.853 00:00:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.853 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:25.853 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.853 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:25.853 00:00:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:25.853 00:00:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.O2AK0rY43H 00:24:25.853 00:00:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:25.853 00:00:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.O2AK0rY43H 00:24:25.853 00:00:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:25.853 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:25.853 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.853 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:25.853 00:00:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:25.853 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:25.853 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.853 [2024-07-16 00:00:40.963756] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:25.853 [2024-07-16 00:00:40.963860] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:25.853 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:25.853 00:00:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O2AK0rY43H 00:24:25.853 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:25.853 00:00:40 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@10 -- # set +x 00:24:25.853 [2024-07-16 00:00:40.975777] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:25.853 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:25.853 00:00:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O2AK0rY43H 00:24:25.853 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:25.853 00:00:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:25.853 [2024-07-16 00:00:40.987825] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:25.853 [2024-07-16 00:00:40.987861] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:26.113 nvme0n1 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:26.113 [ 00:24:26.113 { 00:24:26.113 "name": "nvme0n1", 00:24:26.113 "aliases": [ 00:24:26.113 "e66daed5-03b4-4934-9088-10f3226277b0" 00:24:26.113 ], 00:24:26.113 "product_name": "NVMe disk", 00:24:26.113 "block_size": 512, 00:24:26.113 "num_blocks": 2097152, 00:24:26.113 "uuid": "e66daed5-03b4-4934-9088-10f3226277b0", 00:24:26.113 "assigned_rate_limits": { 00:24:26.113 "rw_ios_per_sec": 0, 00:24:26.113 "rw_mbytes_per_sec": 0, 00:24:26.113 "r_mbytes_per_sec": 0, 00:24:26.113 "w_mbytes_per_sec": 0 00:24:26.113 }, 00:24:26.113 "claimed": false, 00:24:26.113 "zoned": false, 00:24:26.113 "supported_io_types": { 00:24:26.113 "read": true, 00:24:26.113 "write": true, 00:24:26.113 "unmap": false, 00:24:26.113 "flush": true, 00:24:26.113 "reset": true, 00:24:26.113 "nvme_admin": true, 00:24:26.113 "nvme_io": true, 00:24:26.113 "nvme_io_md": false, 00:24:26.113 "write_zeroes": true, 00:24:26.113 "zcopy": false, 00:24:26.113 "get_zone_info": false, 00:24:26.113 "zone_management": false, 00:24:26.113 "zone_append": false, 00:24:26.113 "compare": true, 00:24:26.113 "compare_and_write": true, 00:24:26.113 "abort": true, 00:24:26.113 "seek_hole": false, 00:24:26.113 "seek_data": false, 00:24:26.113 "copy": true, 00:24:26.113 "nvme_iov_md": false 00:24:26.113 }, 00:24:26.113 "memory_domains": [ 00:24:26.113 { 00:24:26.113 "dma_device_id": "system", 00:24:26.113 "dma_device_type": 1 00:24:26.113 } 00:24:26.113 ], 00:24:26.113 "driver_specific": { 00:24:26.113 "nvme": [ 00:24:26.113 { 00:24:26.113 "trid": { 00:24:26.113 "trtype": "TCP", 00:24:26.113 "adrfam": "IPv4", 00:24:26.113 "traddr": "10.0.0.2", 00:24:26.113 "trsvcid": "4421", 00:24:26.113 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:26.113 }, 00:24:26.113 "ctrlr_data": { 00:24:26.113 "cntlid": 3, 00:24:26.113 "vendor_id": "0x8086", 00:24:26.113 "model_number": "SPDK bdev Controller", 00:24:26.113 "serial_number": "00000000000000000000", 00:24:26.113 "firmware_revision": "24.09", 00:24:26.113 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:26.113 "oacs": { 00:24:26.113 "security": 0, 
00:24:26.113 "format": 0, 00:24:26.113 "firmware": 0, 00:24:26.113 "ns_manage": 0 00:24:26.113 }, 00:24:26.113 "multi_ctrlr": true, 00:24:26.113 "ana_reporting": false 00:24:26.113 }, 00:24:26.113 "vs": { 00:24:26.113 "nvme_version": "1.3" 00:24:26.113 }, 00:24:26.113 "ns_data": { 00:24:26.113 "id": 1, 00:24:26.113 "can_share": true 00:24:26.113 } 00:24:26.113 } 00:24:26.113 ], 00:24:26.113 "mp_policy": "active_passive" 00:24:26.113 } 00:24:26.113 } 00:24:26.113 ] 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.O2AK0rY43H 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:26.113 rmmod nvme_tcp 00:24:26.113 rmmod nvme_fabrics 00:24:26.113 rmmod nvme_keyring 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 558940 ']' 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 558940 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@942 -- # '[' -z 558940 ']' 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # kill -0 558940 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@947 -- # uname 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 558940 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@960 -- # echo 'killing process with pid 558940' 00:24:26.113 killing process with pid 558940 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@961 -- # kill 558940 00:24:26.113 [2024-07-16 00:00:41.249377] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:26.113 [2024-07-16 
00:00:41.249404] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:26.113 00:00:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # wait 558940 00:24:26.374 00:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:26.374 00:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:26.374 00:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:26.374 00:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:26.374 00:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:26.374 00:00:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.374 00:00:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:26.374 00:00:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.292 00:00:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:28.292 00:24:28.292 real 0m12.391s 00:24:28.292 user 0m4.380s 00:24:28.292 sys 0m6.462s 00:24:28.292 00:00:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1118 -- # xtrace_disable 00:24:28.292 00:00:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.292 ************************************ 00:24:28.292 END TEST nvmf_async_init 00:24:28.292 ************************************ 00:24:28.557 00:00:43 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:24:28.557 00:00:43 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:28.557 00:00:43 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:24:28.557 00:00:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:24:28.557 00:00:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:28.557 ************************************ 00:24:28.557 START TEST dma 00:24:28.557 ************************************ 00:24:28.557 00:00:43 nvmf_tcp.dma -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:28.557 * Looking for test storage... 
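For reference, the TLS secure-channel leg of the nvmf_async_init test that just finished reduces to the sequence below. This is a sketch: the scripts/rpc.py wrapper is an assumption, the redirection of the key material into the mktemp file is implied rather than shown in the trace, and both the PSK path and controller PSK options are flagged as deprecated in the warnings above.

    # TLS listener and PSK-authenticated attach as exercised by host/async_init.sh (sketch)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > /tmp/tmp.O2AK0rY43H
    chmod 0600 /tmp/tmp.O2AK0rY43H
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O2AK0rY43H
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.O2AK0rY43H

The resulting nvme0n1 bdev reports cntlid 3 and trsvcid 4421 in the bdev_get_bdevs output above, reflecting the third controller association against the same subsystem (initial attach, post-reset reconnect, then the TLS attach).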
00:24:28.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:28.557 00:00:43 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.557 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:24:28.557 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.557 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.557 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.557 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.557 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.557 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.557 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.558 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.558 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.558 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.558 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:28.558 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:28.558 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.558 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.558 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.558 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.558 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.558 00:00:43 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.558 00:00:43 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.558 00:00:43 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.558 00:00:43 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.558 00:00:43 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.558 00:00:43 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.558 00:00:43 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:24:28.558 00:00:43 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.558 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:24:28.558 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:28.558 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:28.558 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.558 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.558 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.558 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:28.558 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:28.558 00:00:43 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:28.558 00:00:43 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:28.558 00:00:43 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:24:28.558 00:24:28.558 real 0m0.134s 00:24:28.558 user 0m0.062s 00:24:28.558 sys 0m0.081s 00:24:28.558 00:00:43 nvmf_tcp.dma -- common/autotest_common.sh@1118 -- # xtrace_disable 00:24:28.558 00:00:43 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:24:28.558 ************************************ 00:24:28.558 END TEST dma 00:24:28.558 ************************************ 00:24:28.558 00:00:43 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:24:28.558 00:00:43 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:28.558 00:00:43 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:24:28.558 00:00:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:24:28.558 00:00:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:28.558 ************************************ 00:24:28.558 START TEST nvmf_identify 00:24:28.558 ************************************ 00:24:28.558 00:00:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:28.819 * Looking for test storage... 
00:24:28.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:28.819 00:00:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:36.961 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:36.961 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:36.961 Found net devices under 0000:31:00.0: cvl_0_0 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:36.961 Found net devices under 0000:31:00.1: cvl_0_1 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:36.961 00:00:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:36.961 00:00:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:36.961 00:00:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:36.962 00:00:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:36.962 00:00:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:36.962 00:00:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:36.962 00:00:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:37.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:37.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:24:37.222 00:24:37.222 --- 10.0.0.2 ping statistics --- 00:24:37.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.222 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:24:37.222 00:00:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:37.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
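# [annotation - editor's sketch, not captured output] The block above is nvmf_tcp_init from nvmf/common.sh building a
# self-contained loopback topology: one port of the E810 pair (cvl_0_0, found under PCI 0000:31:00.0) is moved into a
# network namespace and becomes the target side at 10.0.0.2, while its sibling cvl_0_1 stays in the root namespace as
# the initiator at 10.0.0.1. Condensed, the traced commands amount to:
ls /sys/bus/pci/devices/0000:31:00.0/net/                     # -> cvl_0_0 (how a supported NIC is mapped to its net device)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                            # reachability check in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# The interface names and addresses are the ones from this trace; on other hosts common.sh derives them from whatever
# supported NICs it discovers.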
00:24:37.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:24:37.222 00:24:37.222 --- 10.0.0.1 ping statistics --- 00:24:37.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.222 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:24:37.222 00:00:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:37.222 00:00:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:37.222 00:00:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:37.222 00:00:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:37.222 00:00:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:37.222 00:00:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:37.222 00:00:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:37.222 00:00:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:37.222 00:00:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:37.222 00:00:52 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:37.222 00:00:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:37.222 00:00:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:37.222 00:00:52 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=564057 00:24:37.222 00:00:52 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:37.222 00:00:52 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:37.222 00:00:52 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 564057 00:24:37.222 00:00:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@823 -- # '[' -z 564057 ']' 00:24:37.222 00:00:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.222 00:00:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@828 -- # local max_retries=100 00:24:37.222 00:00:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.222 00:00:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # xtrace_disable 00:24:37.222 00:00:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:37.222 [2024-07-16 00:00:52.263894] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:24:37.222 [2024-07-16 00:00:52.263982] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.222 [2024-07-16 00:00:52.345359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:37.483 [2024-07-16 00:00:52.421294] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.483 [2024-07-16 00:00:52.421337] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
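# [annotation - editor's sketch, not captured output] The *NOTICE* lines above ("Total cores available: 4",
# "Tracepoint Group Mask 0xFFFF") are the SPDK target starting inside the namespace. identify.sh launched it in the
# background and waits for its RPC socket; stripped of the suite's helpers (nvmfappstart/waitforlisten) the step is
# roughly (paths shortened):
modprobe nvme-tcp                                             # kernel NVMe/TCP initiator used by later host-side tests
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
./scripts/rpc.py -t 30 rpc_get_methods > /dev/null            # stand-in readiness poll; returns once /var/tmp/spdk.sock answers
# (-i 0 pins the shared-memory id, -e 0xFFFF enables all tracepoint groups, -m 0xF runs reactors on four cores,
#  matching the core count reported above.)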
00:24:37.483 [2024-07-16 00:00:52.421345] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.483 [2024-07-16 00:00:52.421351] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.483 [2024-07-16 00:00:52.421356] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:37.483 [2024-07-16 00:00:52.421536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.483 [2024-07-16 00:00:52.421688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.483 [2024-07-16 00:00:52.421842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.483 [2024-07-16 00:00:52.421843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # return 0 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:38.053 [2024-07-16 00:00:53.042685] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:38.053 Malloc0 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:38.053 [2024-07-16 00:00:53.142227] 
tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:38.053 [ 00:24:38.053 { 00:24:38.053 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:38.053 "subtype": "Discovery", 00:24:38.053 "listen_addresses": [ 00:24:38.053 { 00:24:38.053 "trtype": "TCP", 00:24:38.053 "adrfam": "IPv4", 00:24:38.053 "traddr": "10.0.0.2", 00:24:38.053 "trsvcid": "4420" 00:24:38.053 } 00:24:38.053 ], 00:24:38.053 "allow_any_host": true, 00:24:38.053 "hosts": [] 00:24:38.053 }, 00:24:38.053 { 00:24:38.053 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.053 "subtype": "NVMe", 00:24:38.053 "listen_addresses": [ 00:24:38.053 { 00:24:38.053 "trtype": "TCP", 00:24:38.053 "adrfam": "IPv4", 00:24:38.053 "traddr": "10.0.0.2", 00:24:38.053 "trsvcid": "4420" 00:24:38.053 } 00:24:38.053 ], 00:24:38.053 "allow_any_host": true, 00:24:38.053 "hosts": [], 00:24:38.053 "serial_number": "SPDK00000000000001", 00:24:38.053 "model_number": "SPDK bdev Controller", 00:24:38.053 "max_namespaces": 32, 00:24:38.053 "min_cntlid": 1, 00:24:38.053 "max_cntlid": 65519, 00:24:38.053 "namespaces": [ 00:24:38.053 { 00:24:38.053 "nsid": 1, 00:24:38.053 "bdev_name": "Malloc0", 00:24:38.053 "name": "Malloc0", 00:24:38.053 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:38.053 "eui64": "ABCDEF0123456789", 00:24:38.053 "uuid": "fb5d11fa-2e38-45ea-8ea8-fc3030cf5d70" 00:24:38.053 } 00:24:38.053 ] 00:24:38.053 } 00:24:38.053 ] 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.053 00:00:53 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:38.053 [2024-07-16 00:00:53.204917] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
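# [annotation - editor's sketch, not captured output] rpc_cmd in the trace above is essentially the suite's wrapper
# around scripts/rpc.py, so the target configuration that nvmf_get_subsystems just echoed back as JSON could be
# reproduced by hand as:
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MB RAM-backed bdev with 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems
# With both listeners up, spdk_nvme_identify (started above against the discovery subnqn) dumps the discovery
# controller. A stock nvme-cli initiator could read the same information; shown for orientation only, not part of
# this trace:
#   nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
#   nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$NVME_HOSTNQN"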
00:24:38.053 [2024-07-16 00:00:53.204958] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid564331 ] 00:24:38.053 [2024-07-16 00:00:53.238903] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:38.053 [2024-07-16 00:00:53.238957] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:38.053 [2024-07-16 00:00:53.238962] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:38.053 [2024-07-16 00:00:53.238974] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:38.053 [2024-07-16 00:00:53.238980] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:38.053 [2024-07-16 00:00:53.239463] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:38.053 [2024-07-16 00:00:53.239493] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1919ec0 0 00:24:38.317 [2024-07-16 00:00:53.250240] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:38.317 [2024-07-16 00:00:53.250255] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:38.317 [2024-07-16 00:00:53.250260] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:38.317 [2024-07-16 00:00:53.250263] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:38.317 [2024-07-16 00:00:53.250302] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.317 [2024-07-16 00:00:53.250308] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.317 [2024-07-16 00:00:53.250312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1919ec0) 00:24:38.317 [2024-07-16 00:00:53.250326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:38.317 [2024-07-16 00:00:53.250343] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ce40, cid 0, qid 0 00:24:38.317 [2024-07-16 00:00:53.258239] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.317 [2024-07-16 00:00:53.258248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.317 [2024-07-16 00:00:53.258252] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.317 [2024-07-16 00:00:53.258257] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ce40) on tqpair=0x1919ec0 00:24:38.317 [2024-07-16 00:00:53.258268] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:38.317 [2024-07-16 00:00:53.258275] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:38.317 [2024-07-16 00:00:53.258284] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:38.317 [2024-07-16 00:00:53.258297] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.317 [2024-07-16 00:00:53.258301] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.317 [2024-07-16 
00:00:53.258305] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1919ec0) 00:24:38.317 [2024-07-16 00:00:53.258313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.317 [2024-07-16 00:00:53.258325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ce40, cid 0, qid 0 00:24:38.317 [2024-07-16 00:00:53.258542] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.317 [2024-07-16 00:00:53.258549] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.317 [2024-07-16 00:00:53.258552] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.317 [2024-07-16 00:00:53.258556] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ce40) on tqpair=0x1919ec0 00:24:38.317 [2024-07-16 00:00:53.258561] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:38.317 [2024-07-16 00:00:53.258569] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:38.317 [2024-07-16 00:00:53.258575] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.317 [2024-07-16 00:00:53.258579] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.317 [2024-07-16 00:00:53.258583] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1919ec0) 00:24:38.317 [2024-07-16 00:00:53.258589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.317 [2024-07-16 00:00:53.258600] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ce40, cid 0, qid 0 00:24:38.317 [2024-07-16 00:00:53.258824] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.317 [2024-07-16 00:00:53.258830] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.317 [2024-07-16 00:00:53.258834] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.317 [2024-07-16 00:00:53.258838] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ce40) on tqpair=0x1919ec0 00:24:38.317 [2024-07-16 00:00:53.258843] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:38.317 [2024-07-16 00:00:53.258851] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:38.317 [2024-07-16 00:00:53.258858] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.317 [2024-07-16 00:00:53.258861] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.317 [2024-07-16 00:00:53.258865] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1919ec0) 00:24:38.317 [2024-07-16 00:00:53.258872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.317 [2024-07-16 00:00:53.258882] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ce40, cid 0, qid 0 00:24:38.317 [2024-07-16 00:00:53.259136] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.317 [2024-07-16 00:00:53.259142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:24:38.317 [2024-07-16 00:00:53.259145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.317 [2024-07-16 00:00:53.259150] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ce40) on tqpair=0x1919ec0 00:24:38.317 [2024-07-16 00:00:53.259155] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:38.317 [2024-07-16 00:00:53.259166] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.317 [2024-07-16 00:00:53.259170] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.317 [2024-07-16 00:00:53.259174] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1919ec0) 00:24:38.317 [2024-07-16 00:00:53.259181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.317 [2024-07-16 00:00:53.259190] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ce40, cid 0, qid 0 00:24:38.317 [2024-07-16 00:00:53.259410] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.317 [2024-07-16 00:00:53.259417] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.317 [2024-07-16 00:00:53.259421] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.317 [2024-07-16 00:00:53.259424] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ce40) on tqpair=0x1919ec0 00:24:38.317 [2024-07-16 00:00:53.259429] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:38.317 [2024-07-16 00:00:53.259434] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:38.317 [2024-07-16 00:00:53.259441] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:38.317 [2024-07-16 00:00:53.259547] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:38.317 [2024-07-16 00:00:53.259552] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:38.317 [2024-07-16 00:00:53.259560] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.317 [2024-07-16 00:00:53.259564] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.317 [2024-07-16 00:00:53.259568] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1919ec0) 00:24:38.317 [2024-07-16 00:00:53.259574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.317 [2024-07-16 00:00:53.259585] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ce40, cid 0, qid 0 00:24:38.317 [2024-07-16 00:00:53.259791] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.317 [2024-07-16 00:00:53.259798] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.317 [2024-07-16 00:00:53.259801] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.259805] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ce40) on tqpair=0x1919ec0 00:24:38.318 [2024-07-16 00:00:53.259810] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:38.318 [2024-07-16 00:00:53.259819] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.259823] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.259826] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1919ec0) 00:24:38.318 [2024-07-16 00:00:53.259833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.318 [2024-07-16 00:00:53.259842] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ce40, cid 0, qid 0 00:24:38.318 [2024-07-16 00:00:53.260054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.318 [2024-07-16 00:00:53.260060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.318 [2024-07-16 00:00:53.260064] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.260068] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ce40) on tqpair=0x1919ec0 00:24:38.318 [2024-07-16 00:00:53.260073] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:38.318 [2024-07-16 00:00:53.260079] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:38.318 [2024-07-16 00:00:53.260087] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:38.318 [2024-07-16 00:00:53.260099] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:38.318 [2024-07-16 00:00:53.260109] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.260112] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1919ec0) 00:24:38.318 [2024-07-16 00:00:53.260119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.318 [2024-07-16 00:00:53.260130] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ce40, cid 0, qid 0 00:24:38.318 [2024-07-16 00:00:53.260357] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.318 [2024-07-16 00:00:53.260364] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.318 [2024-07-16 00:00:53.260368] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.260372] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1919ec0): datao=0, datal=4096, cccid=0 00:24:38.318 [2024-07-16 00:00:53.260377] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x199ce40) on tqpair(0x1919ec0): expected_datao=0, payload_size=4096 00:24:38.318 [2024-07-16 00:00:53.260382] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.260390] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.260394] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.260582] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.318 [2024-07-16 00:00:53.260588] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.318 [2024-07-16 00:00:53.260592] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.260595] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ce40) on tqpair=0x1919ec0 00:24:38.318 [2024-07-16 00:00:53.260603] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:38.318 [2024-07-16 00:00:53.260611] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:38.318 [2024-07-16 00:00:53.260617] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:38.318 [2024-07-16 00:00:53.260622] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:38.318 [2024-07-16 00:00:53.260627] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:38.318 [2024-07-16 00:00:53.260631] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:38.318 [2024-07-16 00:00:53.260639] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:38.318 [2024-07-16 00:00:53.260646] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.260650] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.260654] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1919ec0) 00:24:38.318 [2024-07-16 00:00:53.260661] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:38.318 [2024-07-16 00:00:53.260672] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ce40, cid 0, qid 0 00:24:38.318 [2024-07-16 00:00:53.260881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.318 [2024-07-16 00:00:53.260887] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.318 [2024-07-16 00:00:53.260891] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.260895] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ce40) on tqpair=0x1919ec0 00:24:38.318 [2024-07-16 00:00:53.260903] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.260908] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.260911] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1919ec0) 00:24:38.318 [2024-07-16 00:00:53.260917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.318 [2024-07-16 00:00:53.260923] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:24:38.318 [2024-07-16 00:00:53.260927] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.260930] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1919ec0) 00:24:38.318 [2024-07-16 00:00:53.260936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.318 [2024-07-16 00:00:53.260942] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.260946] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.260949] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1919ec0) 00:24:38.318 [2024-07-16 00:00:53.260955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.318 [2024-07-16 00:00:53.260961] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.260965] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.260968] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919ec0) 00:24:38.318 [2024-07-16 00:00:53.260974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.318 [2024-07-16 00:00:53.260979] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:38.318 [2024-07-16 00:00:53.260989] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:38.318 [2024-07-16 00:00:53.260995] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.260999] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1919ec0) 00:24:38.318 [2024-07-16 00:00:53.261006] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.318 [2024-07-16 00:00:53.261017] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ce40, cid 0, qid 0 00:24:38.318 [2024-07-16 00:00:53.261022] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199cfc0, cid 1, qid 0 00:24:38.318 [2024-07-16 00:00:53.261027] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199d140, cid 2, qid 0 00:24:38.318 [2024-07-16 00:00:53.261032] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199d2c0, cid 3, qid 0 00:24:38.318 [2024-07-16 00:00:53.261037] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199d440, cid 4, qid 0 00:24:38.318 [2024-07-16 00:00:53.261262] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.318 [2024-07-16 00:00:53.261269] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.318 [2024-07-16 00:00:53.261273] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.261277] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199d440) on tqpair=0x1919ec0 00:24:38.318 [2024-07-16 00:00:53.261284] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:38.318 [2024-07-16 00:00:53.261289] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:38.318 [2024-07-16 00:00:53.261299] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.261303] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1919ec0) 00:24:38.318 [2024-07-16 00:00:53.261309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.318 [2024-07-16 00:00:53.261319] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199d440, cid 4, qid 0 00:24:38.318 [2024-07-16 00:00:53.261512] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.318 [2024-07-16 00:00:53.261518] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.318 [2024-07-16 00:00:53.261522] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.261526] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1919ec0): datao=0, datal=4096, cccid=4 00:24:38.318 [2024-07-16 00:00:53.261530] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x199d440) on tqpair(0x1919ec0): expected_datao=0, payload_size=4096 00:24:38.318 [2024-07-16 00:00:53.261534] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.261563] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.261567] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.265237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.318 [2024-07-16 00:00:53.265245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.318 [2024-07-16 00:00:53.265249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.265253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199d440) on tqpair=0x1919ec0 00:24:38.318 [2024-07-16 00:00:53.265265] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:38.318 [2024-07-16 00:00:53.265288] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.265292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1919ec0) 00:24:38.318 [2024-07-16 00:00:53.265299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.318 [2024-07-16 00:00:53.265306] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.265310] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.265314] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1919ec0) 00:24:38.318 [2024-07-16 00:00:53.265320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.318 [2024-07-16 00:00:53.265335] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199d440, cid 4, qid 0 00:24:38.318 [2024-07-16 
00:00:53.265340] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199d5c0, cid 5, qid 0 00:24:38.318 [2024-07-16 00:00:53.265594] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.318 [2024-07-16 00:00:53.265600] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.318 [2024-07-16 00:00:53.265604] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.265608] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1919ec0): datao=0, datal=1024, cccid=4 00:24:38.318 [2024-07-16 00:00:53.265612] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x199d440) on tqpair(0x1919ec0): expected_datao=0, payload_size=1024 00:24:38.318 [2024-07-16 00:00:53.265617] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.265626] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.265629] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.265635] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.318 [2024-07-16 00:00:53.265641] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.318 [2024-07-16 00:00:53.265645] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.318 [2024-07-16 00:00:53.265648] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199d5c0) on tqpair=0x1919ec0 00:24:38.319 [2024-07-16 00:00:53.307426] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.319 [2024-07-16 00:00:53.307435] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.319 [2024-07-16 00:00:53.307438] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.319 [2024-07-16 00:00:53.307442] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199d440) on tqpair=0x1919ec0 00:24:38.319 [2024-07-16 00:00:53.307459] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.319 [2024-07-16 00:00:53.307464] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1919ec0) 00:24:38.319 [2024-07-16 00:00:53.307470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.319 [2024-07-16 00:00:53.307484] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199d440, cid 4, qid 0 00:24:38.319 [2024-07-16 00:00:53.307666] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.319 [2024-07-16 00:00:53.307672] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.319 [2024-07-16 00:00:53.307676] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.319 [2024-07-16 00:00:53.307679] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1919ec0): datao=0, datal=3072, cccid=4 00:24:38.319 [2024-07-16 00:00:53.307684] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x199d440) on tqpair(0x1919ec0): expected_datao=0, payload_size=3072 00:24:38.319 [2024-07-16 00:00:53.307688] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.319 [2024-07-16 00:00:53.307763] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.319 [2024-07-16 00:00:53.307767] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
00:24:38.319 [2024-07-16 00:00:53.307904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.319 [2024-07-16 00:00:53.307910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.319 [2024-07-16 00:00:53.307914] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.319 [2024-07-16 00:00:53.307918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199d440) on tqpair=0x1919ec0 00:24:38.319 [2024-07-16 00:00:53.307926] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.319 [2024-07-16 00:00:53.307930] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1919ec0) 00:24:38.319 [2024-07-16 00:00:53.307936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.319 [2024-07-16 00:00:53.307949] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199d440, cid 4, qid 0 00:24:38.319 [2024-07-16 00:00:53.308180] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.319 [2024-07-16 00:00:53.308187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.319 [2024-07-16 00:00:53.308190] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.319 [2024-07-16 00:00:53.308194] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1919ec0): datao=0, datal=8, cccid=4 00:24:38.319 [2024-07-16 00:00:53.308198] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x199d440) on tqpair(0x1919ec0): expected_datao=0, payload_size=8 00:24:38.319 [2024-07-16 00:00:53.308202] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.319 [2024-07-16 00:00:53.308209] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.319 [2024-07-16 00:00:53.308215] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.319 [2024-07-16 00:00:53.349429] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.319 [2024-07-16 00:00:53.349441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.319 [2024-07-16 00:00:53.349444] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.319 [2024-07-16 00:00:53.349448] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199d440) on tqpair=0x1919ec0 00:24:38.319 ===================================================== 00:24:38.319 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:38.319 ===================================================== 00:24:38.319 Controller Capabilities/Features 00:24:38.319 ================================ 00:24:38.319 Vendor ID: 0000 00:24:38.319 Subsystem Vendor ID: 0000 00:24:38.319 Serial Number: .................... 00:24:38.319 Model Number: ........................................ 
00:24:38.319 Firmware Version: 24.09 00:24:38.319 Recommended Arb Burst: 0 00:24:38.319 IEEE OUI Identifier: 00 00 00 00:24:38.319 Multi-path I/O 00:24:38.319 May have multiple subsystem ports: No 00:24:38.319 May have multiple controllers: No 00:24:38.319 Associated with SR-IOV VF: No 00:24:38.319 Max Data Transfer Size: 131072 00:24:38.319 Max Number of Namespaces: 0 00:24:38.319 Max Number of I/O Queues: 1024 00:24:38.319 NVMe Specification Version (VS): 1.3 00:24:38.319 NVMe Specification Version (Identify): 1.3 00:24:38.319 Maximum Queue Entries: 128 00:24:38.319 Contiguous Queues Required: Yes 00:24:38.319 Arbitration Mechanisms Supported 00:24:38.319 Weighted Round Robin: Not Supported 00:24:38.319 Vendor Specific: Not Supported 00:24:38.319 Reset Timeout: 15000 ms 00:24:38.319 Doorbell Stride: 4 bytes 00:24:38.319 NVM Subsystem Reset: Not Supported 00:24:38.319 Command Sets Supported 00:24:38.319 NVM Command Set: Supported 00:24:38.319 Boot Partition: Not Supported 00:24:38.319 Memory Page Size Minimum: 4096 bytes 00:24:38.319 Memory Page Size Maximum: 4096 bytes 00:24:38.319 Persistent Memory Region: Not Supported 00:24:38.319 Optional Asynchronous Events Supported 00:24:38.319 Namespace Attribute Notices: Not Supported 00:24:38.319 Firmware Activation Notices: Not Supported 00:24:38.319 ANA Change Notices: Not Supported 00:24:38.319 PLE Aggregate Log Change Notices: Not Supported 00:24:38.319 LBA Status Info Alert Notices: Not Supported 00:24:38.319 EGE Aggregate Log Change Notices: Not Supported 00:24:38.319 Normal NVM Subsystem Shutdown event: Not Supported 00:24:38.319 Zone Descriptor Change Notices: Not Supported 00:24:38.319 Discovery Log Change Notices: Supported 00:24:38.319 Controller Attributes 00:24:38.319 128-bit Host Identifier: Not Supported 00:24:38.319 Non-Operational Permissive Mode: Not Supported 00:24:38.319 NVM Sets: Not Supported 00:24:38.319 Read Recovery Levels: Not Supported 00:24:38.319 Endurance Groups: Not Supported 00:24:38.319 Predictable Latency Mode: Not Supported 00:24:38.319 Traffic Based Keep ALive: Not Supported 00:24:38.319 Namespace Granularity: Not Supported 00:24:38.319 SQ Associations: Not Supported 00:24:38.319 UUID List: Not Supported 00:24:38.319 Multi-Domain Subsystem: Not Supported 00:24:38.319 Fixed Capacity Management: Not Supported 00:24:38.319 Variable Capacity Management: Not Supported 00:24:38.319 Delete Endurance Group: Not Supported 00:24:38.319 Delete NVM Set: Not Supported 00:24:38.319 Extended LBA Formats Supported: Not Supported 00:24:38.319 Flexible Data Placement Supported: Not Supported 00:24:38.319 00:24:38.319 Controller Memory Buffer Support 00:24:38.319 ================================ 00:24:38.319 Supported: No 00:24:38.319 00:24:38.319 Persistent Memory Region Support 00:24:38.319 ================================ 00:24:38.319 Supported: No 00:24:38.319 00:24:38.319 Admin Command Set Attributes 00:24:38.319 ============================ 00:24:38.319 Security Send/Receive: Not Supported 00:24:38.319 Format NVM: Not Supported 00:24:38.319 Firmware Activate/Download: Not Supported 00:24:38.319 Namespace Management: Not Supported 00:24:38.319 Device Self-Test: Not Supported 00:24:38.319 Directives: Not Supported 00:24:38.319 NVMe-MI: Not Supported 00:24:38.319 Virtualization Management: Not Supported 00:24:38.319 Doorbell Buffer Config: Not Supported 00:24:38.319 Get LBA Status Capability: Not Supported 00:24:38.319 Command & Feature Lockdown Capability: Not Supported 00:24:38.319 Abort Command Limit: 1 00:24:38.319 Async 
Event Request Limit: 4 00:24:38.319 Number of Firmware Slots: N/A 00:24:38.319 Firmware Slot 1 Read-Only: N/A 00:24:38.319 Firmware Activation Without Reset: N/A 00:24:38.319 Multiple Update Detection Support: N/A 00:24:38.319 Firmware Update Granularity: No Information Provided 00:24:38.319 Per-Namespace SMART Log: No 00:24:38.319 Asymmetric Namespace Access Log Page: Not Supported 00:24:38.319 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:38.319 Command Effects Log Page: Not Supported 00:24:38.319 Get Log Page Extended Data: Supported 00:24:38.319 Telemetry Log Pages: Not Supported 00:24:38.319 Persistent Event Log Pages: Not Supported 00:24:38.319 Supported Log Pages Log Page: May Support 00:24:38.319 Commands Supported & Effects Log Page: Not Supported 00:24:38.319 Feature Identifiers & Effects Log Page:May Support 00:24:38.319 NVMe-MI Commands & Effects Log Page: May Support 00:24:38.319 Data Area 4 for Telemetry Log: Not Supported 00:24:38.319 Error Log Page Entries Supported: 128 00:24:38.319 Keep Alive: Not Supported 00:24:38.319 00:24:38.319 NVM Command Set Attributes 00:24:38.319 ========================== 00:24:38.319 Submission Queue Entry Size 00:24:38.319 Max: 1 00:24:38.319 Min: 1 00:24:38.319 Completion Queue Entry Size 00:24:38.319 Max: 1 00:24:38.319 Min: 1 00:24:38.319 Number of Namespaces: 0 00:24:38.319 Compare Command: Not Supported 00:24:38.319 Write Uncorrectable Command: Not Supported 00:24:38.319 Dataset Management Command: Not Supported 00:24:38.319 Write Zeroes Command: Not Supported 00:24:38.319 Set Features Save Field: Not Supported 00:24:38.319 Reservations: Not Supported 00:24:38.319 Timestamp: Not Supported 00:24:38.319 Copy: Not Supported 00:24:38.319 Volatile Write Cache: Not Present 00:24:38.319 Atomic Write Unit (Normal): 1 00:24:38.319 Atomic Write Unit (PFail): 1 00:24:38.319 Atomic Compare & Write Unit: 1 00:24:38.319 Fused Compare & Write: Supported 00:24:38.319 Scatter-Gather List 00:24:38.319 SGL Command Set: Supported 00:24:38.319 SGL Keyed: Supported 00:24:38.319 SGL Bit Bucket Descriptor: Not Supported 00:24:38.319 SGL Metadata Pointer: Not Supported 00:24:38.319 Oversized SGL: Not Supported 00:24:38.319 SGL Metadata Address: Not Supported 00:24:38.319 SGL Offset: Supported 00:24:38.319 Transport SGL Data Block: Not Supported 00:24:38.319 Replay Protected Memory Block: Not Supported 00:24:38.319 00:24:38.320 Firmware Slot Information 00:24:38.320 ========================= 00:24:38.320 Active slot: 0 00:24:38.320 00:24:38.320 00:24:38.320 Error Log 00:24:38.320 ========= 00:24:38.320 00:24:38.320 Active Namespaces 00:24:38.320 ================= 00:24:38.320 Discovery Log Page 00:24:38.320 ================== 00:24:38.320 Generation Counter: 2 00:24:38.320 Number of Records: 2 00:24:38.320 Record Format: 0 00:24:38.320 00:24:38.320 Discovery Log Entry 0 00:24:38.320 ---------------------- 00:24:38.320 Transport Type: 3 (TCP) 00:24:38.320 Address Family: 1 (IPv4) 00:24:38.320 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:38.320 Entry Flags: 00:24:38.320 Duplicate Returned Information: 1 00:24:38.320 Explicit Persistent Connection Support for Discovery: 1 00:24:38.320 Transport Requirements: 00:24:38.320 Secure Channel: Not Required 00:24:38.320 Port ID: 0 (0x0000) 00:24:38.320 Controller ID: 65535 (0xffff) 00:24:38.320 Admin Max SQ Size: 128 00:24:38.320 Transport Service Identifier: 4420 00:24:38.320 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:38.320 Transport Address: 10.0.0.2 00:24:38.320 
Discovery Log Entry 1 00:24:38.320 ---------------------- 00:24:38.320 Transport Type: 3 (TCP) 00:24:38.320 Address Family: 1 (IPv4) 00:24:38.320 Subsystem Type: 2 (NVM Subsystem) 00:24:38.320 Entry Flags: 00:24:38.320 Duplicate Returned Information: 0 00:24:38.320 Explicit Persistent Connection Support for Discovery: 0 00:24:38.320 Transport Requirements: 00:24:38.320 Secure Channel: Not Required 00:24:38.320 Port ID: 0 (0x0000) 00:24:38.320 Controller ID: 65535 (0xffff) 00:24:38.320 Admin Max SQ Size: 128 00:24:38.320 Transport Service Identifier: 4420 00:24:38.320 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:38.320 Transport Address: 10.0.0.2 [2024-07-16 00:00:53.349536] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:38.320 [2024-07-16 00:00:53.349547] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ce40) on tqpair=0x1919ec0 00:24:38.320 [2024-07-16 00:00:53.349554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.320 [2024-07-16 00:00:53.349559] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199cfc0) on tqpair=0x1919ec0 00:24:38.320 [2024-07-16 00:00:53.349564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.320 [2024-07-16 00:00:53.349569] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199d140) on tqpair=0x1919ec0 00:24:38.320 [2024-07-16 00:00:53.349574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.320 [2024-07-16 00:00:53.349578] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199d2c0) on tqpair=0x1919ec0 00:24:38.320 [2024-07-16 00:00:53.349583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.320 [2024-07-16 00:00:53.349594] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.349598] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.349602] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919ec0) 00:24:38.320 [2024-07-16 00:00:53.349610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.320 [2024-07-16 00:00:53.349623] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199d2c0, cid 3, qid 0 00:24:38.320 [2024-07-16 00:00:53.349732] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.320 [2024-07-16 00:00:53.349738] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.320 [2024-07-16 00:00:53.349742] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.349745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199d2c0) on tqpair=0x1919ec0 00:24:38.320 [2024-07-16 00:00:53.349753] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.349756] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.349760] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919ec0) 00:24:38.320 [2024-07-16 
00:00:53.349767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.320 [2024-07-16 00:00:53.349779] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199d2c0, cid 3, qid 0 00:24:38.320 [2024-07-16 00:00:53.349963] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.320 [2024-07-16 00:00:53.349970] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.320 [2024-07-16 00:00:53.349973] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.349977] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199d2c0) on tqpair=0x1919ec0 00:24:38.320 [2024-07-16 00:00:53.349982] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:38.320 [2024-07-16 00:00:53.349987] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:38.320 [2024-07-16 00:00:53.349998] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.350002] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.350006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919ec0) 00:24:38.320 [2024-07-16 00:00:53.350013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.320 [2024-07-16 00:00:53.350023] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199d2c0, cid 3, qid 0 00:24:38.320 [2024-07-16 00:00:53.350222] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.320 [2024-07-16 00:00:53.350233] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.320 [2024-07-16 00:00:53.350237] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.350241] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199d2c0) on tqpair=0x1919ec0 00:24:38.320 [2024-07-16 00:00:53.350251] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.350254] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.350258] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919ec0) 00:24:38.320 [2024-07-16 00:00:53.350265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.320 [2024-07-16 00:00:53.350275] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199d2c0, cid 3, qid 0 00:24:38.320 [2024-07-16 00:00:53.350456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.320 [2024-07-16 00:00:53.350462] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.320 [2024-07-16 00:00:53.350466] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.350469] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199d2c0) on tqpair=0x1919ec0 00:24:38.320 [2024-07-16 00:00:53.350479] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.350483] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.350486] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919ec0) 00:24:38.320 [2024-07-16 00:00:53.350493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.320 [2024-07-16 00:00:53.350502] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199d2c0, cid 3, qid 0 00:24:38.320 [2024-07-16 00:00:53.350720] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.320 [2024-07-16 00:00:53.350726] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.320 [2024-07-16 00:00:53.350730] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.350734] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199d2c0) on tqpair=0x1919ec0 00:24:38.320 [2024-07-16 00:00:53.350743] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.350747] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.350750] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919ec0) 00:24:38.320 [2024-07-16 00:00:53.350757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.320 [2024-07-16 00:00:53.350767] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199d2c0, cid 3, qid 0 00:24:38.320 [2024-07-16 00:00:53.350975] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.320 [2024-07-16 00:00:53.350981] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.320 [2024-07-16 00:00:53.350985] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.350989] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199d2c0) on tqpair=0x1919ec0 00:24:38.320 [2024-07-16 00:00:53.350998] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.351004] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.351007] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919ec0) 00:24:38.320 [2024-07-16 00:00:53.351014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.320 [2024-07-16 00:00:53.351024] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199d2c0, cid 3, qid 0 00:24:38.320 [2024-07-16 00:00:53.351202] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.320 [2024-07-16 00:00:53.351208] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.320 [2024-07-16 00:00:53.351211] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.351215] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199d2c0) on tqpair=0x1919ec0 00:24:38.320 [2024-07-16 00:00:53.351225] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.351228] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.351235] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919ec0) 00:24:38.320 [2024-07-16 00:00:53.351242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.320 [2024-07-16 00:00:53.351252] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199d2c0, cid 3, qid 0 00:24:38.320 [2024-07-16 00:00:53.351470] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.320 [2024-07-16 00:00:53.351476] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.320 [2024-07-16 00:00:53.351480] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.351483] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199d2c0) on tqpair=0x1919ec0 00:24:38.320 [2024-07-16 00:00:53.351493] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.351497] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.320 [2024-07-16 00:00:53.351500] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919ec0) 00:24:38.321 [2024-07-16 00:00:53.351507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.321 [2024-07-16 00:00:53.351516] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199d2c0, cid 3, qid 0 00:24:38.321 [2024-07-16 00:00:53.351745] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.321 [2024-07-16 00:00:53.351752] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.321 [2024-07-16 00:00:53.351755] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.321 [2024-07-16 00:00:53.351759] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199d2c0) on tqpair=0x1919ec0 00:24:38.321 [2024-07-16 00:00:53.351769] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.321 [2024-07-16 00:00:53.351773] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.321 [2024-07-16 00:00:53.351776] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919ec0) 00:24:38.321 [2024-07-16 00:00:53.351783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.321 [2024-07-16 00:00:53.351792] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199d2c0, cid 3, qid 0 00:24:38.321 [2024-07-16 00:00:53.351970] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.321 [2024-07-16 00:00:53.351976] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.321 [2024-07-16 00:00:53.351979] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.321 [2024-07-16 00:00:53.351983] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199d2c0) on tqpair=0x1919ec0 00:24:38.321 [2024-07-16 00:00:53.351993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.321 [2024-07-16 00:00:53.351996] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.321 [2024-07-16 00:00:53.352002] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919ec0) 00:24:38.321 [2024-07-16 00:00:53.352009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.321 [2024-07-16 00:00:53.352018] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199d2c0, cid 3, qid 0 00:24:38.321 
[2024-07-16 00:00:53.356235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.321 [2024-07-16 00:00:53.356243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.321 [2024-07-16 00:00:53.356247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.321 [2024-07-16 00:00:53.356251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199d2c0) on tqpair=0x1919ec0 00:24:38.321 [2024-07-16 00:00:53.356261] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.321 [2024-07-16 00:00:53.356264] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.321 [2024-07-16 00:00:53.356268] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1919ec0) 00:24:38.321 [2024-07-16 00:00:53.356275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.321 [2024-07-16 00:00:53.356286] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199d2c0, cid 3, qid 0 00:24:38.321 [2024-07-16 00:00:53.356481] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.321 [2024-07-16 00:00:53.356487] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.321 [2024-07-16 00:00:53.356491] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.321 [2024-07-16 00:00:53.356495] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199d2c0) on tqpair=0x1919ec0 00:24:38.321 [2024-07-16 00:00:53.356502] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:24:38.321 00:24:38.321 00:00:53 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:38.321 [2024-07-16 00:00:53.397123] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
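The spdk_nvme_identify run launched above drives the standard admin-queue bring-up traced in the following entries (read VS/CAP, write CC.EN, poll CSTS.RDY, IDENTIFY, AER, keep-alive and queue-count setup). A minimal host-side sketch of that connect-and-identify flow, assuming the public API names from the SPDK spdk/nvme.h and spdk/env.h headers; this is not the test's own source:

```c
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts opts;
    struct spdk_nvme_transport_id trid = {};
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&opts);
    opts.name = "identify_sketch";        /* hypothetical app name */
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }

    /* Same target string the test passes to spdk_nvme_identify via -r. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Runs the admin-queue init state machine seen in the debug trace
     * (read vs/cap, set CC.EN = 1, wait for CSTS.RDY = 1, identify, ...). */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("max xfer size: %u, namespaces: %u\n",
           spdk_nvme_ctrlr_get_max_xfer_size(ctrlr), cdata->nn);

    spdk_nvme_detach(ctrlr);
    return 0;
}
```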
00:24:38.321 [2024-07-16 00:00:53.397193] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid564336 ] 00:24:38.321 [2024-07-16 00:00:53.434347] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:38.321 [2024-07-16 00:00:53.434394] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:38.321 [2024-07-16 00:00:53.434399] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:38.321 [2024-07-16 00:00:53.434411] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:38.321 [2024-07-16 00:00:53.434416] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:38.321 [2024-07-16 00:00:53.434695] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:38.321 [2024-07-16 00:00:53.434726] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x6ddec0 0 00:24:38.321 [2024-07-16 00:00:53.441237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:38.321 [2024-07-16 00:00:53.441247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:38.321 [2024-07-16 00:00:53.441251] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:38.321 [2024-07-16 00:00:53.441254] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:38.321 [2024-07-16 00:00:53.441287] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.321 [2024-07-16 00:00:53.441293] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.321 [2024-07-16 00:00:53.441297] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6ddec0) 00:24:38.321 [2024-07-16 00:00:53.441308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:38.321 [2024-07-16 00:00:53.441324] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x760e40, cid 0, qid 0 00:24:38.321 [2024-07-16 00:00:53.449239] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.321 [2024-07-16 00:00:53.449248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.321 [2024-07-16 00:00:53.449251] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.321 [2024-07-16 00:00:53.449256] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x760e40) on tqpair=0x6ddec0 00:24:38.321 [2024-07-16 00:00:53.449264] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:38.321 [2024-07-16 00:00:53.449270] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:38.321 [2024-07-16 00:00:53.449275] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:38.321 [2024-07-16 00:00:53.449287] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.321 [2024-07-16 00:00:53.449291] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.321 [2024-07-16 00:00:53.449295] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6ddec0) 00:24:38.321 [2024-07-16 00:00:53.449303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.321 [2024-07-16 00:00:53.449315] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x760e40, cid 0, qid 0 00:24:38.321 [2024-07-16 00:00:53.449389] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.321 [2024-07-16 00:00:53.449396] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.321 [2024-07-16 00:00:53.449400] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.321 [2024-07-16 00:00:53.449404] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x760e40) on tqpair=0x6ddec0 00:24:38.321 [2024-07-16 00:00:53.449408] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:38.321 [2024-07-16 00:00:53.449415] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:38.321 [2024-07-16 00:00:53.449422] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.321 [2024-07-16 00:00:53.449426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.321 [2024-07-16 00:00:53.449429] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6ddec0) 00:24:38.321 [2024-07-16 00:00:53.449436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.321 [2024-07-16 00:00:53.449446] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x760e40, cid 0, qid 0 00:24:38.321 [2024-07-16 00:00:53.449512] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.321 [2024-07-16 00:00:53.449519] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.321 [2024-07-16 00:00:53.449523] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.321 [2024-07-16 00:00:53.449526] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x760e40) on tqpair=0x6ddec0 00:24:38.321 [2024-07-16 00:00:53.449531] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:38.321 [2024-07-16 00:00:53.449539] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:38.321 [2024-07-16 00:00:53.449545] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.321 [2024-07-16 00:00:53.449552] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.321 [2024-07-16 00:00:53.449555] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6ddec0) 00:24:38.321 [2024-07-16 00:00:53.449562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.321 [2024-07-16 00:00:53.449572] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x760e40, cid 0, qid 0 00:24:38.321 [2024-07-16 00:00:53.449640] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.322 [2024-07-16 00:00:53.449647] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.322 [2024-07-16 00:00:53.449650] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.449654] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x760e40) on tqpair=0x6ddec0 00:24:38.322 [2024-07-16 00:00:53.449659] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:38.322 [2024-07-16 00:00:53.449668] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.449671] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.449675] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6ddec0) 00:24:38.322 [2024-07-16 00:00:53.449681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.322 [2024-07-16 00:00:53.449691] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x760e40, cid 0, qid 0 00:24:38.322 [2024-07-16 00:00:53.449751] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.322 [2024-07-16 00:00:53.449758] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.322 [2024-07-16 00:00:53.449761] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.449765] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x760e40) on tqpair=0x6ddec0 00:24:38.322 [2024-07-16 00:00:53.449769] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:38.322 [2024-07-16 00:00:53.449774] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:38.322 [2024-07-16 00:00:53.449781] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:38.322 [2024-07-16 00:00:53.449887] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:38.322 [2024-07-16 00:00:53.449890] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:38.322 [2024-07-16 00:00:53.449898] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.449902] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.449905] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6ddec0) 00:24:38.322 [2024-07-16 00:00:53.449912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.322 [2024-07-16 00:00:53.449922] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x760e40, cid 0, qid 0 00:24:38.322 [2024-07-16 00:00:53.449992] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.322 [2024-07-16 00:00:53.449999] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.322 [2024-07-16 00:00:53.450002] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.450006] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x760e40) on tqpair=0x6ddec0 00:24:38.322 [2024-07-16 00:00:53.450010] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:38.322 [2024-07-16 00:00:53.450019] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.450025] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.450028] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6ddec0) 00:24:38.322 [2024-07-16 00:00:53.450035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.322 [2024-07-16 00:00:53.450045] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x760e40, cid 0, qid 0 00:24:38.322 [2024-07-16 00:00:53.450108] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.322 [2024-07-16 00:00:53.450115] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.322 [2024-07-16 00:00:53.450118] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.450122] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x760e40) on tqpair=0x6ddec0 00:24:38.322 [2024-07-16 00:00:53.450126] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:38.322 [2024-07-16 00:00:53.450131] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:38.322 [2024-07-16 00:00:53.450138] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:38.322 [2024-07-16 00:00:53.450145] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:38.322 [2024-07-16 00:00:53.450154] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.450158] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6ddec0) 00:24:38.322 [2024-07-16 00:00:53.450165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.322 [2024-07-16 00:00:53.450175] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x760e40, cid 0, qid 0 00:24:38.322 [2024-07-16 00:00:53.450274] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.322 [2024-07-16 00:00:53.450284] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.322 [2024-07-16 00:00:53.450291] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.450295] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6ddec0): datao=0, datal=4096, cccid=0 00:24:38.322 [2024-07-16 00:00:53.450299] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x760e40) on tqpair(0x6ddec0): expected_datao=0, payload_size=4096 00:24:38.322 [2024-07-16 00:00:53.450303] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.450332] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.450336] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.322 [2024-07-16 
00:00:53.491291] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.322 [2024-07-16 00:00:53.491305] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.322 [2024-07-16 00:00:53.491309] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.491313] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x760e40) on tqpair=0x6ddec0 00:24:38.322 [2024-07-16 00:00:53.491321] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:38.322 [2024-07-16 00:00:53.491329] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:38.322 [2024-07-16 00:00:53.491333] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:38.322 [2024-07-16 00:00:53.491337] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:38.322 [2024-07-16 00:00:53.491341] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:38.322 [2024-07-16 00:00:53.491348] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:38.322 [2024-07-16 00:00:53.491357] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:38.322 [2024-07-16 00:00:53.491364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.491368] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.491371] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6ddec0) 00:24:38.322 [2024-07-16 00:00:53.491378] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:38.322 [2024-07-16 00:00:53.491391] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x760e40, cid 0, qid 0 00:24:38.322 [2024-07-16 00:00:53.491462] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.322 [2024-07-16 00:00:53.491468] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.322 [2024-07-16 00:00:53.491472] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.491476] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x760e40) on tqpair=0x6ddec0 00:24:38.322 [2024-07-16 00:00:53.491482] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.491486] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.491489] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6ddec0) 00:24:38.322 [2024-07-16 00:00:53.491495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.322 [2024-07-16 00:00:53.491502] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.491505] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.491509] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x6ddec0) 00:24:38.322 
[2024-07-16 00:00:53.491514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.322 [2024-07-16 00:00:53.491520] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.491524] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.491527] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x6ddec0) 00:24:38.322 [2024-07-16 00:00:53.491533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.322 [2024-07-16 00:00:53.491539] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.491543] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.491546] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.322 [2024-07-16 00:00:53.491552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.322 [2024-07-16 00:00:53.491556] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:38.322 [2024-07-16 00:00:53.491567] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:38.322 [2024-07-16 00:00:53.491573] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.491576] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6ddec0) 00:24:38.322 [2024-07-16 00:00:53.491583] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.322 [2024-07-16 00:00:53.491595] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x760e40, cid 0, qid 0 00:24:38.322 [2024-07-16 00:00:53.491603] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x760fc0, cid 1, qid 0 00:24:38.322 [2024-07-16 00:00:53.491611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x761140, cid 2, qid 0 00:24:38.322 [2024-07-16 00:00:53.491617] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.322 [2024-07-16 00:00:53.491622] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x761440, cid 4, qid 0 00:24:38.322 [2024-07-16 00:00:53.491703] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.322 [2024-07-16 00:00:53.491710] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.322 [2024-07-16 00:00:53.491713] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.322 [2024-07-16 00:00:53.491717] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x761440) on tqpair=0x6ddec0 00:24:38.322 [2024-07-16 00:00:53.491721] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:38.322 [2024-07-16 00:00:53.491726] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:38.323 [2024-07-16 00:00:53.491734] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:38.323 [2024-07-16 00:00:53.491740] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:38.323 [2024-07-16 00:00:53.491746] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.491750] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.491753] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6ddec0) 00:24:38.323 [2024-07-16 00:00:53.491760] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:38.323 [2024-07-16 00:00:53.491770] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x761440, cid 4, qid 0 00:24:38.323 [2024-07-16 00:00:53.491839] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.323 [2024-07-16 00:00:53.491846] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.323 [2024-07-16 00:00:53.491849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.491853] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x761440) on tqpair=0x6ddec0 00:24:38.323 [2024-07-16 00:00:53.491916] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:38.323 [2024-07-16 00:00:53.491925] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:38.323 [2024-07-16 00:00:53.491932] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.491936] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6ddec0) 00:24:38.323 [2024-07-16 00:00:53.491942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.323 [2024-07-16 00:00:53.491952] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x761440, cid 4, qid 0 00:24:38.323 [2024-07-16 00:00:53.492029] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.323 [2024-07-16 00:00:53.492039] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.323 [2024-07-16 00:00:53.492044] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.492048] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6ddec0): datao=0, datal=4096, cccid=4 00:24:38.323 [2024-07-16 00:00:53.492052] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x761440) on tqpair(0x6ddec0): expected_datao=0, payload_size=4096 00:24:38.323 [2024-07-16 00:00:53.492057] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.492065] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.492069] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.492121] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.323 [2024-07-16 00:00:53.492127] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:24:38.323 [2024-07-16 00:00:53.492131] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.492135] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x761440) on tqpair=0x6ddec0 00:24:38.323 [2024-07-16 00:00:53.492144] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:38.323 [2024-07-16 00:00:53.492153] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:38.323 [2024-07-16 00:00:53.492161] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:38.323 [2024-07-16 00:00:53.492168] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.492172] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6ddec0) 00:24:38.323 [2024-07-16 00:00:53.492178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.323 [2024-07-16 00:00:53.492189] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x761440, cid 4, qid 0 00:24:38.323 [2024-07-16 00:00:53.492270] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.323 [2024-07-16 00:00:53.492280] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.323 [2024-07-16 00:00:53.492285] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.492289] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6ddec0): datao=0, datal=4096, cccid=4 00:24:38.323 [2024-07-16 00:00:53.492293] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x761440) on tqpair(0x6ddec0): expected_datao=0, payload_size=4096 00:24:38.323 [2024-07-16 00:00:53.492297] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.492304] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.492307] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.492409] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.323 [2024-07-16 00:00:53.492415] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.323 [2024-07-16 00:00:53.492419] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.492422] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x761440) on tqpair=0x6ddec0 00:24:38.323 [2024-07-16 00:00:53.492435] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:38.323 [2024-07-16 00:00:53.492444] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:38.323 [2024-07-16 00:00:53.492451] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.492455] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6ddec0) 00:24:38.323 [2024-07-16 00:00:53.492461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.323 [2024-07-16 00:00:53.492472] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x761440, cid 4, qid 0 00:24:38.323 [2024-07-16 00:00:53.492545] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.323 [2024-07-16 00:00:53.492555] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.323 [2024-07-16 00:00:53.492561] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.492564] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6ddec0): datao=0, datal=4096, cccid=4 00:24:38.323 [2024-07-16 00:00:53.492572] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x761440) on tqpair(0x6ddec0): expected_datao=0, payload_size=4096 00:24:38.323 [2024-07-16 00:00:53.492577] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.492583] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.492587] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.492674] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.323 [2024-07-16 00:00:53.492680] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.323 [2024-07-16 00:00:53.492684] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.492687] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x761440) on tqpair=0x6ddec0 00:24:38.323 [2024-07-16 00:00:53.492695] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:38.323 [2024-07-16 00:00:53.492703] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:38.323 [2024-07-16 00:00:53.492711] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:38.323 [2024-07-16 00:00:53.492717] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:38.323 [2024-07-16 00:00:53.492722] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:38.323 [2024-07-16 00:00:53.492727] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:38.323 [2024-07-16 00:00:53.492732] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:38.323 [2024-07-16 00:00:53.492736] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:38.323 [2024-07-16 00:00:53.492741] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:38.323 [2024-07-16 00:00:53.492755] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.492759] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6ddec0) 00:24:38.323 [2024-07-16 00:00:53.492765] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.323 [2024-07-16 00:00:53.492772] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.492775] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.492779] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6ddec0) 00:24:38.323 [2024-07-16 00:00:53.492785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.323 [2024-07-16 00:00:53.492797] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x761440, cid 4, qid 0 00:24:38.323 [2024-07-16 00:00:53.492804] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7615c0, cid 5, qid 0 00:24:38.323 [2024-07-16 00:00:53.492884] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.323 [2024-07-16 00:00:53.492891] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.323 [2024-07-16 00:00:53.492894] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.492898] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x761440) on tqpair=0x6ddec0 00:24:38.323 [2024-07-16 00:00:53.492904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.323 [2024-07-16 00:00:53.492910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.323 [2024-07-16 00:00:53.492916] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.492919] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7615c0) on tqpair=0x6ddec0 00:24:38.323 [2024-07-16 00:00:53.492928] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.492932] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6ddec0) 00:24:38.323 [2024-07-16 00:00:53.492938] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.323 [2024-07-16 00:00:53.492948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7615c0, cid 5, qid 0 00:24:38.323 [2024-07-16 00:00:53.493016] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.323 [2024-07-16 00:00:53.493023] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.323 [2024-07-16 00:00:53.493026] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.493030] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7615c0) on tqpair=0x6ddec0 00:24:38.323 [2024-07-16 00:00:53.493039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.493042] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6ddec0) 00:24:38.323 [2024-07-16 00:00:53.493049] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.323 [2024-07-16 00:00:53.493058] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7615c0, cid 5, qid 0 00:24:38.323 [2024-07-16 00:00:53.493122] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.323 [2024-07-16 00:00:53.493129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:24:38.323 [2024-07-16 00:00:53.493132] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.493136] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7615c0) on tqpair=0x6ddec0 00:24:38.323 [2024-07-16 00:00:53.493145] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.493149] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6ddec0) 00:24:38.323 [2024-07-16 00:00:53.493155] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.323 [2024-07-16 00:00:53.493165] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7615c0, cid 5, qid 0 00:24:38.323 [2024-07-16 00:00:53.497236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.323 [2024-07-16 00:00:53.497245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.323 [2024-07-16 00:00:53.497249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.497252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7615c0) on tqpair=0x6ddec0 00:24:38.323 [2024-07-16 00:00:53.497268] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.323 [2024-07-16 00:00:53.497272] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6ddec0) 00:24:38.324 [2024-07-16 00:00:53.497278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.324 [2024-07-16 00:00:53.497285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.324 [2024-07-16 00:00:53.497289] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6ddec0) 00:24:38.324 [2024-07-16 00:00:53.497295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.324 [2024-07-16 00:00:53.497302] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.324 [2024-07-16 00:00:53.497306] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x6ddec0) 00:24:38.324 [2024-07-16 00:00:53.497312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.324 [2024-07-16 00:00:53.497321] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.324 [2024-07-16 00:00:53.497325] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x6ddec0) 00:24:38.324 [2024-07-16 00:00:53.497331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.324 [2024-07-16 00:00:53.497344] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7615c0, cid 5, qid 0 00:24:38.324 [2024-07-16 00:00:53.497350] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x761440, cid 4, qid 0 00:24:38.324 [2024-07-16 00:00:53.497357] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x761740, cid 6, qid 0 00:24:38.324 [2024-07-16 
00:00:53.497366] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7618c0, cid 7, qid 0 00:24:38.324 [2024-07-16 00:00:53.497482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.324 [2024-07-16 00:00:53.497490] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.324 [2024-07-16 00:00:53.497496] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.324 [2024-07-16 00:00:53.497500] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6ddec0): datao=0, datal=8192, cccid=5 00:24:38.324 [2024-07-16 00:00:53.497504] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7615c0) on tqpair(0x6ddec0): expected_datao=0, payload_size=8192 00:24:38.324 [2024-07-16 00:00:53.497508] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.324 [2024-07-16 00:00:53.497656] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.324 [2024-07-16 00:00:53.497663] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.324 [2024-07-16 00:00:53.497671] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.324 [2024-07-16 00:00:53.497677] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.324 [2024-07-16 00:00:53.497680] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.324 [2024-07-16 00:00:53.497684] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6ddec0): datao=0, datal=512, cccid=4 00:24:38.324 [2024-07-16 00:00:53.497688] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x761440) on tqpair(0x6ddec0): expected_datao=0, payload_size=512 00:24:38.324 [2024-07-16 00:00:53.497692] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.324 [2024-07-16 00:00:53.497698] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.324 [2024-07-16 00:00:53.497702] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.324 [2024-07-16 00:00:53.497707] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.324 [2024-07-16 00:00:53.497713] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.324 [2024-07-16 00:00:53.497716] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.324 [2024-07-16 00:00:53.497720] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6ddec0): datao=0, datal=512, cccid=6 00:24:38.324 [2024-07-16 00:00:53.497724] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x761740) on tqpair(0x6ddec0): expected_datao=0, payload_size=512 00:24:38.324 [2024-07-16 00:00:53.497728] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.324 [2024-07-16 00:00:53.497734] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.324 [2024-07-16 00:00:53.497738] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.324 [2024-07-16 00:00:53.497743] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:38.324 [2024-07-16 00:00:53.497749] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:38.324 [2024-07-16 00:00:53.497752] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:38.324 [2024-07-16 00:00:53.497755] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6ddec0): datao=0, datal=4096, cccid=7 00:24:38.324 [2024-07-16 00:00:53.497760] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7618c0) on tqpair(0x6ddec0): expected_datao=0, payload_size=4096 00:24:38.324 [2024-07-16 00:00:53.497766] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.324 [2024-07-16 00:00:53.497773] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:38.324 [2024-07-16 00:00:53.497776] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:38.324 [2024-07-16 00:00:53.497801] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.324 [2024-07-16 00:00:53.497807] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.324 [2024-07-16 00:00:53.497810] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.324 [2024-07-16 00:00:53.497814] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7615c0) on tqpair=0x6ddec0 00:24:38.324 [2024-07-16 00:00:53.497826] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.324 [2024-07-16 00:00:53.497832] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.324 [2024-07-16 00:00:53.497835] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.324 [2024-07-16 00:00:53.497839] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x761440) on tqpair=0x6ddec0 00:24:38.324 [2024-07-16 00:00:53.497848] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.324 [2024-07-16 00:00:53.497854] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.324 [2024-07-16 00:00:53.497858] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.324 [2024-07-16 00:00:53.497861] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x761740) on tqpair=0x6ddec0 00:24:38.324 [2024-07-16 00:00:53.497868] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.324 [2024-07-16 00:00:53.497874] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.324 [2024-07-16 00:00:53.497877] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.324 [2024-07-16 00:00:53.497881] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7618c0) on tqpair=0x6ddec0 00:24:38.324 ===================================================== 00:24:38.324 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:38.324 ===================================================== 00:24:38.324 Controller Capabilities/Features 00:24:38.324 ================================ 00:24:38.324 Vendor ID: 8086 00:24:38.324 Subsystem Vendor ID: 8086 00:24:38.324 Serial Number: SPDK00000000000001 00:24:38.324 Model Number: SPDK bdev Controller 00:24:38.324 Firmware Version: 24.09 00:24:38.324 Recommended Arb Burst: 6 00:24:38.324 IEEE OUI Identifier: e4 d2 5c 00:24:38.324 Multi-path I/O 00:24:38.324 May have multiple subsystem ports: Yes 00:24:38.324 May have multiple controllers: Yes 00:24:38.324 Associated with SR-IOV VF: No 00:24:38.324 Max Data Transfer Size: 131072 00:24:38.324 Max Number of Namespaces: 32 00:24:38.324 Max Number of I/O Queues: 127 00:24:38.324 NVMe Specification Version (VS): 1.3 00:24:38.324 NVMe Specification Version (Identify): 1.3 00:24:38.324 Maximum Queue Entries: 128 00:24:38.324 Contiguous Queues Required: Yes 00:24:38.324 Arbitration Mechanisms Supported 00:24:38.324 Weighted Round Robin: Not Supported 00:24:38.324 Vendor Specific: Not Supported 00:24:38.324 Reset Timeout: 15000 ms 00:24:38.324 
Doorbell Stride: 4 bytes 00:24:38.324 NVM Subsystem Reset: Not Supported 00:24:38.324 Command Sets Supported 00:24:38.324 NVM Command Set: Supported 00:24:38.324 Boot Partition: Not Supported 00:24:38.324 Memory Page Size Minimum: 4096 bytes 00:24:38.324 Memory Page Size Maximum: 4096 bytes 00:24:38.324 Persistent Memory Region: Not Supported 00:24:38.324 Optional Asynchronous Events Supported 00:24:38.324 Namespace Attribute Notices: Supported 00:24:38.324 Firmware Activation Notices: Not Supported 00:24:38.324 ANA Change Notices: Not Supported 00:24:38.324 PLE Aggregate Log Change Notices: Not Supported 00:24:38.324 LBA Status Info Alert Notices: Not Supported 00:24:38.324 EGE Aggregate Log Change Notices: Not Supported 00:24:38.324 Normal NVM Subsystem Shutdown event: Not Supported 00:24:38.324 Zone Descriptor Change Notices: Not Supported 00:24:38.324 Discovery Log Change Notices: Not Supported 00:24:38.324 Controller Attributes 00:24:38.324 128-bit Host Identifier: Supported 00:24:38.324 Non-Operational Permissive Mode: Not Supported 00:24:38.324 NVM Sets: Not Supported 00:24:38.324 Read Recovery Levels: Not Supported 00:24:38.324 Endurance Groups: Not Supported 00:24:38.324 Predictable Latency Mode: Not Supported 00:24:38.324 Traffic Based Keep ALive: Not Supported 00:24:38.324 Namespace Granularity: Not Supported 00:24:38.324 SQ Associations: Not Supported 00:24:38.324 UUID List: Not Supported 00:24:38.324 Multi-Domain Subsystem: Not Supported 00:24:38.324 Fixed Capacity Management: Not Supported 00:24:38.324 Variable Capacity Management: Not Supported 00:24:38.324 Delete Endurance Group: Not Supported 00:24:38.324 Delete NVM Set: Not Supported 00:24:38.324 Extended LBA Formats Supported: Not Supported 00:24:38.324 Flexible Data Placement Supported: Not Supported 00:24:38.324 00:24:38.324 Controller Memory Buffer Support 00:24:38.324 ================================ 00:24:38.324 Supported: No 00:24:38.324 00:24:38.324 Persistent Memory Region Support 00:24:38.324 ================================ 00:24:38.324 Supported: No 00:24:38.324 00:24:38.324 Admin Command Set Attributes 00:24:38.324 ============================ 00:24:38.324 Security Send/Receive: Not Supported 00:24:38.324 Format NVM: Not Supported 00:24:38.324 Firmware Activate/Download: Not Supported 00:24:38.324 Namespace Management: Not Supported 00:24:38.324 Device Self-Test: Not Supported 00:24:38.324 Directives: Not Supported 00:24:38.324 NVMe-MI: Not Supported 00:24:38.324 Virtualization Management: Not Supported 00:24:38.324 Doorbell Buffer Config: Not Supported 00:24:38.324 Get LBA Status Capability: Not Supported 00:24:38.324 Command & Feature Lockdown Capability: Not Supported 00:24:38.324 Abort Command Limit: 4 00:24:38.324 Async Event Request Limit: 4 00:24:38.324 Number of Firmware Slots: N/A 00:24:38.324 Firmware Slot 1 Read-Only: N/A 00:24:38.324 Firmware Activation Without Reset: N/A 00:24:38.324 Multiple Update Detection Support: N/A 00:24:38.324 Firmware Update Granularity: No Information Provided 00:24:38.324 Per-Namespace SMART Log: No 00:24:38.324 Asymmetric Namespace Access Log Page: Not Supported 00:24:38.324 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:38.324 Command Effects Log Page: Supported 00:24:38.324 Get Log Page Extended Data: Supported 00:24:38.324 Telemetry Log Pages: Not Supported 00:24:38.324 Persistent Event Log Pages: Not Supported 00:24:38.324 Supported Log Pages Log Page: May Support 00:24:38.324 Commands Supported & Effects Log Page: Not Supported 00:24:38.324 Feature Identifiers & 
Effects Log Page:May Support 00:24:38.324 NVMe-MI Commands & Effects Log Page: May Support 00:24:38.324 Data Area 4 for Telemetry Log: Not Supported 00:24:38.324 Error Log Page Entries Supported: 128 00:24:38.324 Keep Alive: Supported 00:24:38.324 Keep Alive Granularity: 10000 ms 00:24:38.324 00:24:38.324 NVM Command Set Attributes 00:24:38.324 ========================== 00:24:38.324 Submission Queue Entry Size 00:24:38.324 Max: 64 00:24:38.324 Min: 64 00:24:38.324 Completion Queue Entry Size 00:24:38.324 Max: 16 00:24:38.324 Min: 16 00:24:38.324 Number of Namespaces: 32 00:24:38.324 Compare Command: Supported 00:24:38.324 Write Uncorrectable Command: Not Supported 00:24:38.324 Dataset Management Command: Supported 00:24:38.324 Write Zeroes Command: Supported 00:24:38.324 Set Features Save Field: Not Supported 00:24:38.324 Reservations: Supported 00:24:38.324 Timestamp: Not Supported 00:24:38.324 Copy: Supported 00:24:38.324 Volatile Write Cache: Present 00:24:38.324 Atomic Write Unit (Normal): 1 00:24:38.324 Atomic Write Unit (PFail): 1 00:24:38.324 Atomic Compare & Write Unit: 1 00:24:38.324 Fused Compare & Write: Supported 00:24:38.324 Scatter-Gather List 00:24:38.324 SGL Command Set: Supported 00:24:38.324 SGL Keyed: Supported 00:24:38.324 SGL Bit Bucket Descriptor: Not Supported 00:24:38.324 SGL Metadata Pointer: Not Supported 00:24:38.324 Oversized SGL: Not Supported 00:24:38.324 SGL Metadata Address: Not Supported 00:24:38.324 SGL Offset: Supported 00:24:38.324 Transport SGL Data Block: Not Supported 00:24:38.324 Replay Protected Memory Block: Not Supported 00:24:38.324 00:24:38.324 Firmware Slot Information 00:24:38.324 ========================= 00:24:38.324 Active slot: 1 00:24:38.324 Slot 1 Firmware Revision: 24.09 00:24:38.324 00:24:38.324 00:24:38.324 Commands Supported and Effects 00:24:38.324 ============================== 00:24:38.324 Admin Commands 00:24:38.324 -------------- 00:24:38.324 Get Log Page (02h): Supported 00:24:38.324 Identify (06h): Supported 00:24:38.324 Abort (08h): Supported 00:24:38.324 Set Features (09h): Supported 00:24:38.324 Get Features (0Ah): Supported 00:24:38.324 Asynchronous Event Request (0Ch): Supported 00:24:38.324 Keep Alive (18h): Supported 00:24:38.324 I/O Commands 00:24:38.324 ------------ 00:24:38.324 Flush (00h): Supported LBA-Change 00:24:38.324 Write (01h): Supported LBA-Change 00:24:38.324 Read (02h): Supported 00:24:38.324 Compare (05h): Supported 00:24:38.324 Write Zeroes (08h): Supported LBA-Change 00:24:38.324 Dataset Management (09h): Supported LBA-Change 00:24:38.324 Copy (19h): Supported LBA-Change 00:24:38.324 00:24:38.324 Error Log 00:24:38.324 ========= 00:24:38.324 00:24:38.324 Arbitration 00:24:38.324 =========== 00:24:38.324 Arbitration Burst: 1 00:24:38.324 00:24:38.324 Power Management 00:24:38.324 ================ 00:24:38.324 Number of Power States: 1 00:24:38.324 Current Power State: Power State #0 00:24:38.325 Power State #0: 00:24:38.325 Max Power: 0.00 W 00:24:38.325 Non-Operational State: Operational 00:24:38.325 Entry Latency: Not Reported 00:24:38.325 Exit Latency: Not Reported 00:24:38.325 Relative Read Throughput: 0 00:24:38.325 Relative Read Latency: 0 00:24:38.325 Relative Write Throughput: 0 00:24:38.325 Relative Write Latency: 0 00:24:38.325 Idle Power: Not Reported 00:24:38.325 Active Power: Not Reported 00:24:38.325 Non-Operational Permissive Mode: Not Supported 00:24:38.325 00:24:38.325 Health Information 00:24:38.325 ================== 00:24:38.325 Critical Warnings: 00:24:38.325 Available Spare Space: 
OK 00:24:38.325 Temperature: OK 00:24:38.325 Device Reliability: OK 00:24:38.325 Read Only: No 00:24:38.325 Volatile Memory Backup: OK 00:24:38.325 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:38.325 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:38.325 Available Spare: 0% 00:24:38.325 Available Spare Threshold: 0% 00:24:38.325 Life Percentage Used:[2024-07-16 00:00:53.497978] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.497983] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x6ddec0) 00:24:38.325 [2024-07-16 00:00:53.497990] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.325 [2024-07-16 00:00:53.498002] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7618c0, cid 7, qid 0 00:24:38.325 [2024-07-16 00:00:53.498076] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.325 [2024-07-16 00:00:53.498083] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.325 [2024-07-16 00:00:53.498087] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.498090] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7618c0) on tqpair=0x6ddec0 00:24:38.325 [2024-07-16 00:00:53.498122] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:38.325 [2024-07-16 00:00:53.498131] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x760e40) on tqpair=0x6ddec0 00:24:38.325 [2024-07-16 00:00:53.498137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.325 [2024-07-16 00:00:53.498142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x760fc0) on tqpair=0x6ddec0 00:24:38.325 [2024-07-16 00:00:53.498146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.325 [2024-07-16 00:00:53.498151] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x761140) on tqpair=0x6ddec0 00:24:38.325 [2024-07-16 00:00:53.498156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.325 [2024-07-16 00:00:53.498161] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.325 [2024-07-16 00:00:53.498165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.325 [2024-07-16 00:00:53.498175] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.498179] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.498182] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.325 [2024-07-16 00:00:53.498189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.325 [2024-07-16 00:00:53.498201] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.325 [2024-07-16 00:00:53.498277] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.325 [2024-07-16 00:00:53.498284] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.325 [2024-07-16 00:00:53.498288] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.498291] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.325 [2024-07-16 00:00:53.498298] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.498302] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.498305] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.325 [2024-07-16 00:00:53.498312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.325 [2024-07-16 00:00:53.498325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.325 [2024-07-16 00:00:53.498397] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.325 [2024-07-16 00:00:53.498404] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.325 [2024-07-16 00:00:53.498407] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.498411] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.325 [2024-07-16 00:00:53.498416] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:38.325 [2024-07-16 00:00:53.498420] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:38.325 [2024-07-16 00:00:53.498429] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.498433] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.498436] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.325 [2024-07-16 00:00:53.498443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.325 [2024-07-16 00:00:53.498453] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.325 [2024-07-16 00:00:53.498519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.325 [2024-07-16 00:00:53.498525] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.325 [2024-07-16 00:00:53.498529] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.498532] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.325 [2024-07-16 00:00:53.498542] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.498546] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.498549] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.325 [2024-07-16 00:00:53.498556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.325 [2024-07-16 00:00:53.498565] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.325 [2024-07-16 00:00:53.498629] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.325 [2024-07-16 00:00:53.498636] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.325 [2024-07-16 00:00:53.498641] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.498645] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.325 [2024-07-16 00:00:53.498654] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.498658] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.498662] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.325 [2024-07-16 00:00:53.498668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.325 [2024-07-16 00:00:53.498678] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.325 [2024-07-16 00:00:53.498740] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.325 [2024-07-16 00:00:53.498747] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.325 [2024-07-16 00:00:53.498750] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.498754] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.325 [2024-07-16 00:00:53.498763] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.498767] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.498770] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.325 [2024-07-16 00:00:53.498777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.325 [2024-07-16 00:00:53.498787] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.325 [2024-07-16 00:00:53.498850] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.325 [2024-07-16 00:00:53.498856] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.325 [2024-07-16 00:00:53.498860] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.498863] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.325 [2024-07-16 00:00:53.498873] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.498876] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.498880] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.325 [2024-07-16 00:00:53.498886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.325 [2024-07-16 00:00:53.498896] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.325 [2024-07-16 00:00:53.498956] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.325 [2024-07-16 00:00:53.498962] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.325 [2024-07-16 00:00:53.498965] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.498969] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.325 [2024-07-16 00:00:53.498978] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.498982] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.498986] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.325 [2024-07-16 00:00:53.498992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.325 [2024-07-16 00:00:53.499002] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.325 [2024-07-16 00:00:53.499080] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.325 [2024-07-16 00:00:53.499087] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.325 [2024-07-16 00:00:53.499091] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.499097] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.325 [2024-07-16 00:00:53.499106] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.499111] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.499114] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.325 [2024-07-16 00:00:53.499121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.325 [2024-07-16 00:00:53.499131] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.325 [2024-07-16 00:00:53.499191] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.325 [2024-07-16 00:00:53.499197] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.325 [2024-07-16 00:00:53.499201] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.499204] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.325 [2024-07-16 00:00:53.499214] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.499217] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.499221] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.325 [2024-07-16 00:00:53.499227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.325 [2024-07-16 00:00:53.499242] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.325 [2024-07-16 00:00:53.499309] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.325 [2024-07-16 00:00:53.499315] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.325 [2024-07-16 00:00:53.499319] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.499322] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.325 
[2024-07-16 00:00:53.499332] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.499336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.499339] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.325 [2024-07-16 00:00:53.499346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.325 [2024-07-16 00:00:53.499355] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.325 [2024-07-16 00:00:53.499418] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.325 [2024-07-16 00:00:53.499424] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.325 [2024-07-16 00:00:53.499428] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.499431] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.325 [2024-07-16 00:00:53.499441] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.499445] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.499448] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.325 [2024-07-16 00:00:53.499455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.325 [2024-07-16 00:00:53.499464] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.325 [2024-07-16 00:00:53.499524] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.325 [2024-07-16 00:00:53.499531] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.325 [2024-07-16 00:00:53.499534] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.499538] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.325 [2024-07-16 00:00:53.499549] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.499553] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.325 [2024-07-16 00:00:53.499556] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.325 [2024-07-16 00:00:53.499563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.325 [2024-07-16 00:00:53.499573] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.325 [2024-07-16 00:00:53.499642] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.325 [2024-07-16 00:00:53.499648] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.326 [2024-07-16 00:00:53.499652] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.499655] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.326 [2024-07-16 00:00:53.499665] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.499668] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.326 [2024-07-16 
00:00:53.499672] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.326 [2024-07-16 00:00:53.499678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.326 [2024-07-16 00:00:53.499689] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.326 [2024-07-16 00:00:53.499760] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.326 [2024-07-16 00:00:53.499767] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.326 [2024-07-16 00:00:53.499770] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.499774] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.326 [2024-07-16 00:00:53.499783] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.499787] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.499791] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.326 [2024-07-16 00:00:53.499797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.326 [2024-07-16 00:00:53.499807] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.326 [2024-07-16 00:00:53.499872] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.326 [2024-07-16 00:00:53.499879] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.326 [2024-07-16 00:00:53.499882] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.499886] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.326 [2024-07-16 00:00:53.499895] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.499899] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.499902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.326 [2024-07-16 00:00:53.499909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.326 [2024-07-16 00:00:53.499919] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.326 [2024-07-16 00:00:53.499993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.326 [2024-07-16 00:00:53.500000] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.326 [2024-07-16 00:00:53.500003] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500007] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.326 [2024-07-16 00:00:53.500016] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500022] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.326 [2024-07-16 00:00:53.500033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.326 [2024-07-16 00:00:53.500042] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.326 [2024-07-16 00:00:53.500110] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.326 [2024-07-16 00:00:53.500117] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.326 [2024-07-16 00:00:53.500120] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500124] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.326 [2024-07-16 00:00:53.500133] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500137] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.326 [2024-07-16 00:00:53.500147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.326 [2024-07-16 00:00:53.500157] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.326 [2024-07-16 00:00:53.500235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.326 [2024-07-16 00:00:53.500242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.326 [2024-07-16 00:00:53.500245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.326 [2024-07-16 00:00:53.500258] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500262] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500266] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.326 [2024-07-16 00:00:53.500272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.326 [2024-07-16 00:00:53.500282] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.326 [2024-07-16 00:00:53.500346] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.326 [2024-07-16 00:00:53.500352] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.326 [2024-07-16 00:00:53.500356] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500360] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.326 [2024-07-16 00:00:53.500369] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500373] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500376] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.326 [2024-07-16 00:00:53.500383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.326 [2024-07-16 00:00:53.500393] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.326 [2024-07-16 
00:00:53.500464] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.326 [2024-07-16 00:00:53.500470] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.326 [2024-07-16 00:00:53.500474] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500477] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.326 [2024-07-16 00:00:53.500487] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500491] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500494] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.326 [2024-07-16 00:00:53.500502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.326 [2024-07-16 00:00:53.500513] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.326 [2024-07-16 00:00:53.500579] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.326 [2024-07-16 00:00:53.500586] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.326 [2024-07-16 00:00:53.500589] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500593] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.326 [2024-07-16 00:00:53.500602] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500606] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500609] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.326 [2024-07-16 00:00:53.500616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.326 [2024-07-16 00:00:53.500626] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.326 [2024-07-16 00:00:53.500697] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.326 [2024-07-16 00:00:53.500703] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.326 [2024-07-16 00:00:53.500706] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500710] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.326 [2024-07-16 00:00:53.500719] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500723] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500727] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.326 [2024-07-16 00:00:53.500733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.326 [2024-07-16 00:00:53.500743] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.326 [2024-07-16 00:00:53.500861] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.326 [2024-07-16 00:00:53.500867] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.326 [2024-07-16 
00:00:53.500871] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500874] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.326 [2024-07-16 00:00:53.500884] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500888] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500891] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.326 [2024-07-16 00:00:53.500898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.326 [2024-07-16 00:00:53.500907] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.326 [2024-07-16 00:00:53.500970] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.326 [2024-07-16 00:00:53.500976] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.326 [2024-07-16 00:00:53.500979] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500983] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.326 [2024-07-16 00:00:53.500992] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.500996] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.501000] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.326 [2024-07-16 00:00:53.501006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.326 [2024-07-16 00:00:53.501018] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.326 [2024-07-16 00:00:53.501088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.326 [2024-07-16 00:00:53.501094] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.326 [2024-07-16 00:00:53.501098] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.501102] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.326 [2024-07-16 00:00:53.501111] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.501115] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.501118] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.326 [2024-07-16 00:00:53.501125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.326 [2024-07-16 00:00:53.501134] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.326 [2024-07-16 00:00:53.501203] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.326 [2024-07-16 00:00:53.501209] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.326 [2024-07-16 00:00:53.501213] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.326 [2024-07-16 00:00:53.501216] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 
00:24:38.587 [2024-07-16 00:00:53.501226] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:38.587 [2024-07-16 00:00:53.505234] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:38.587 [2024-07-16 00:00:53.505240] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6ddec0) 00:24:38.587 [2024-07-16 00:00:53.505247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.587 [2024-07-16 00:00:53.505259] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7612c0, cid 3, qid 0 00:24:38.587 [2024-07-16 00:00:53.505332] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:38.587 [2024-07-16 00:00:53.505338] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:38.587 [2024-07-16 00:00:53.505342] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:38.587 [2024-07-16 00:00:53.505345] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7612c0) on tqpair=0x6ddec0 00:24:38.587 [2024-07-16 00:00:53.505353] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:24:38.587 0% 00:24:38.587 Data Units Read: 0 00:24:38.587 Data Units Written: 0 00:24:38.587 Host Read Commands: 0 00:24:38.587 Host Write Commands: 0 00:24:38.587 Controller Busy Time: 0 minutes 00:24:38.587 Power Cycles: 0 00:24:38.587 Power On Hours: 0 hours 00:24:38.587 Unsafe Shutdowns: 0 00:24:38.587 Unrecoverable Media Errors: 0 00:24:38.587 Lifetime Error Log Entries: 0 00:24:38.587 Warning Temperature Time: 0 minutes 00:24:38.587 Critical Temperature Time: 0 minutes 00:24:38.587 00:24:38.587 Number of Queues 00:24:38.587 ================ 00:24:38.587 Number of I/O Submission Queues: 127 00:24:38.587 Number of I/O Completion Queues: 127 00:24:38.587 00:24:38.587 Active Namespaces 00:24:38.587 ================= 00:24:38.587 Namespace ID:1 00:24:38.587 Error Recovery Timeout: Unlimited 00:24:38.587 Command Set Identifier: NVM (00h) 00:24:38.587 Deallocate: Supported 00:24:38.587 Deallocated/Unwritten Error: Not Supported 00:24:38.587 Deallocated Read Value: Unknown 00:24:38.587 Deallocate in Write Zeroes: Not Supported 00:24:38.587 Deallocated Guard Field: 0xFFFF 00:24:38.587 Flush: Supported 00:24:38.587 Reservation: Supported 00:24:38.587 Namespace Sharing Capabilities: Multiple Controllers 00:24:38.587 Size (in LBAs): 131072 (0GiB) 00:24:38.587 Capacity (in LBAs): 131072 (0GiB) 00:24:38.587 Utilization (in LBAs): 131072 (0GiB) 00:24:38.587 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:38.587 EUI64: ABCDEF0123456789 00:24:38.587 UUID: fb5d11fa-2e38-45ea-8ea8-fc3030cf5d70 00:24:38.587 Thin Provisioning: Not Supported 00:24:38.587 Per-NS Atomic Units: Yes 00:24:38.587 Atomic Boundary Size (Normal): 0 00:24:38.587 Atomic Boundary Size (PFail): 0 00:24:38.587 Atomic Boundary Offset: 0 00:24:38.587 Maximum Single Source Range Length: 65535 00:24:38.587 Maximum Copy Length: 65535 00:24:38.587 Maximum Source Range Count: 1 00:24:38.587 NGUID/EUI64 Never Reused: No 00:24:38.587 Namespace Write Protected: No 00:24:38.587 Number of LBA Formats: 1 00:24:38.587 Current LBA Format: LBA Format #00 00:24:38.587 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:38.587 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:38.588 rmmod nvme_tcp 00:24:38.588 rmmod nvme_fabrics 00:24:38.588 rmmod nvme_keyring 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 564057 ']' 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 564057 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@942 -- # '[' -z 564057 ']' 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # kill -0 564057 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@947 -- # uname 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 564057 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@960 -- # echo 'killing process with pid 564057' 00:24:38.588 killing process with pid 564057 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@961 -- # kill 564057 00:24:38.588 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # wait 564057 00:24:38.848 00:00:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:38.848 00:00:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:38.848 00:00:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:38.848 00:00:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:38.848 00:00:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:38.848 00:00:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.848 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:38.848 00:00:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.756 00:00:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
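The teardown traced above can be reproduced by hand against a running SPDK target. A minimal sketch, assuming the target is still up, the repository root is the working directory, and the default RPC socket is in use (the pid 564057 and the cvl_0_1 interface are the ones from this particular run):

    # Same JSON-RPC call the rpc_cmd wrapper issues to drop the test subsystem
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # Unload the kernel initiator modules, as nvmfcleanup does in the trace
    sudo modprobe -v -r nvme-tcp
    sudo modprobe -v -r nvme-fabrics
    # Stop the target process and flush the test address, mirroring the
    # killprocess and nvmf_tcp_fini steps above
    sudo kill 564057
    sudo ip -4 addr flush cvl_0_1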
00:24:40.756 00:24:40.756 real 0m12.156s 00:24:40.756 user 0m8.090s 00:24:40.756 sys 0m6.508s 00:24:40.756 00:00:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1118 -- # xtrace_disable 00:24:40.756 00:00:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:40.756 ************************************ 00:24:40.756 END TEST nvmf_identify 00:24:40.756 ************************************ 00:24:40.756 00:00:55 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:24:41.016 00:00:55 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:41.016 00:00:55 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:24:41.016 00:00:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:24:41.016 00:00:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:41.016 ************************************ 00:24:41.016 START TEST nvmf_perf 00:24:41.016 ************************************ 00:24:41.016 00:00:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:41.016 * Looking for test storage... 00:24:41.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:41.016 00:00:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:49.214 00:01:04 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:49.214 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:49.214 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.214 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:49.215 Found net devices under 0000:31:00.0: cvl_0_0 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:49.215 Found net devices under 0000:31:00.1: cvl_0_1 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:49.215 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:49.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:49.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms 00:24:49.475 00:24:49.475 --- 10.0.0.2 ping statistics --- 00:24:49.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.475 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:49.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:49.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:24:49.475 00:24:49.475 --- 10.0.0.1 ping statistics --- 00:24:49.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.475 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=569013 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 569013 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@823 -- # '[' -z 569013 ']' 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@828 -- # local max_retries=100 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # xtrace_disable 00:24:49.475 00:01:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:49.475 [2024-07-16 00:01:04.526119] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
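The block above is the physical-NIC variant of nvmf_tcp_init: one port of the E810 pair (cvl_0_0) is moved into a private namespace and acts as the target, the other (cvl_0_1) stays in the root namespace as the initiator, and the target application is then launched inside that namespace. A condensed sketch of the equivalent manual setup, using the same names and addresses as this run (paths shortened to be relative to the SPDK tree):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

    # the target then runs inside the namespace while perf/fio tools stay outside
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &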
00:24:49.475 [2024-07-16 00:01:04.526186] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:49.475 [2024-07-16 00:01:04.606735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:49.735 [2024-07-16 00:01:04.683563] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:49.735 [2024-07-16 00:01:04.683602] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:49.735 [2024-07-16 00:01:04.683610] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:49.735 [2024-07-16 00:01:04.683616] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:49.735 [2024-07-16 00:01:04.683622] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:49.735 [2024-07-16 00:01:04.683760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.735 [2024-07-16 00:01:04.683892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:49.735 [2024-07-16 00:01:04.683939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.735 [2024-07-16 00:01:04.683941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:50.304 00:01:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:24:50.304 00:01:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # return 0 00:24:50.304 00:01:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:50.304 00:01:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:50.304 00:01:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:50.304 00:01:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:50.304 00:01:05 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:50.304 00:01:05 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:50.875 00:01:05 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:50.875 00:01:05 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:50.876 00:01:06 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:50.876 00:01:06 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:51.136 00:01:06 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:51.136 00:01:06 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:51.136 00:01:06 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:51.136 00:01:06 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:51.136 00:01:06 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:51.395 [2024-07-16 00:01:06.337622] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.395 00:01:06 nvmf_tcp.nvmf_perf -- 
host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:51.395 00:01:06 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:51.395 00:01:06 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:51.655 00:01:06 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:51.655 00:01:06 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:51.915 00:01:06 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:51.915 [2024-07-16 00:01:07.024069] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.915 00:01:07 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:52.175 00:01:07 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:52.175 00:01:07 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:52.175 00:01:07 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:52.175 00:01:07 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:53.555 Initializing NVMe Controllers 00:24:53.555 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:53.555 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:53.555 Initialization complete. Launching workers. 00:24:53.555 ======================================================== 00:24:53.555 Latency(us) 00:24:53.555 Device Information : IOPS MiB/s Average min max 00:24:53.555 PCIE (0000:65:00.0) NSID 1 from core 0: 79671.65 311.22 401.20 13.29 5276.40 00:24:53.555 ======================================================== 00:24:53.555 Total : 79671.65 311.22 401.20 13.29 5276.40 00:24:53.555 00:24:53.555 00:01:08 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:54.937 Initializing NVMe Controllers 00:24:54.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:54.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:54.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:54.937 Initialization complete. Launching workers. 
00:24:54.937 ======================================================== 00:24:54.937 Latency(us) 00:24:54.937 Device Information : IOPS MiB/s Average min max 00:24:54.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 115.93 0.45 8930.18 375.33 45787.57 00:24:54.937 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 67.96 0.27 15066.98 7845.82 50873.60 00:24:54.937 ======================================================== 00:24:54.937 Total : 183.88 0.72 11198.13 375.33 50873.60 00:24:54.937 00:24:54.937 00:01:09 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:55.878 Initializing NVMe Controllers 00:24:55.878 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:55.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:55.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:55.878 Initialization complete. Launching workers. 00:24:55.878 ======================================================== 00:24:55.878 Latency(us) 00:24:55.878 Device Information : IOPS MiB/s Average min max 00:24:55.878 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10394.99 40.61 3088.64 416.56 9999.07 00:24:55.878 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3788.00 14.80 8486.55 4621.02 18738.77 00:24:55.878 ======================================================== 00:24:55.878 Total : 14182.99 55.40 4530.31 416.56 18738.77 00:24:55.878 00:24:55.878 00:01:10 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:55.878 00:01:10 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:55.878 00:01:10 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:58.417 Initializing NVMe Controllers 00:24:58.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:58.417 Controller IO queue size 128, less than required. 00:24:58.417 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:58.417 Controller IO queue size 128, less than required. 00:24:58.417 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:58.417 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:58.417 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:58.417 Initialization complete. Launching workers. 
00:24:58.417 ======================================================== 00:24:58.417 Latency(us) 00:24:58.417 Device Information : IOPS MiB/s Average min max 00:24:58.417 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1102.50 275.62 118861.72 67302.29 188905.40 00:24:58.417 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 583.00 145.75 226737.21 70064.49 343053.69 00:24:58.417 ======================================================== 00:24:58.417 Total : 1685.49 421.37 156174.93 67302.29 343053.69 00:24:58.417 00:24:58.417 00:01:13 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:58.677 No valid NVMe controllers or AIO or URING devices found 00:24:58.677 Initializing NVMe Controllers 00:24:58.677 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:58.677 Controller IO queue size 128, less than required. 00:24:58.677 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:58.677 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:58.677 Controller IO queue size 128, less than required. 00:24:58.677 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:58.677 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:58.677 WARNING: Some requested NVMe devices were skipped 00:24:58.677 00:01:13 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:01.219 Initializing NVMe Controllers 00:25:01.219 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:01.219 Controller IO queue size 128, less than required. 00:25:01.219 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:01.219 Controller IO queue size 128, less than required. 00:25:01.219 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:01.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:01.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:01.219 Initialization complete. Launching workers. 
00:25:01.219 00:25:01.219 ==================== 00:25:01.219 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:01.219 TCP transport: 00:25:01.219 polls: 32425 00:25:01.219 idle_polls: 11375 00:25:01.220 sock_completions: 21050 00:25:01.220 nvme_completions: 4663 00:25:01.220 submitted_requests: 7048 00:25:01.220 queued_requests: 1 00:25:01.220 00:25:01.220 ==================== 00:25:01.220 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:01.220 TCP transport: 00:25:01.220 polls: 35510 00:25:01.220 idle_polls: 11535 00:25:01.220 sock_completions: 23975 00:25:01.220 nvme_completions: 4477 00:25:01.220 submitted_requests: 6692 00:25:01.220 queued_requests: 1 00:25:01.220 ======================================================== 00:25:01.220 Latency(us) 00:25:01.220 Device Information : IOPS MiB/s Average min max 00:25:01.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1164.63 291.16 112274.48 56117.46 187108.06 00:25:01.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1118.17 279.54 117388.49 53103.92 167997.51 00:25:01.220 ======================================================== 00:25:01.220 Total : 2282.80 570.70 114779.44 53103.92 187108.06 00:25:01.220 00:25:01.220 00:01:16 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:01.220 00:01:16 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:01.220 00:01:16 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:01.220 00:01:16 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:01.220 00:01:16 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:01.220 00:01:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:01.220 00:01:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:25:01.220 00:01:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:01.220 00:01:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:25:01.220 00:01:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:01.220 00:01:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:01.220 rmmod nvme_tcp 00:25:01.220 rmmod nvme_fabrics 00:25:01.220 rmmod nvme_keyring 00:25:01.220 00:01:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:01.220 00:01:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:25:01.220 00:01:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:25:01.220 00:01:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 569013 ']' 00:25:01.220 00:01:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 569013 00:25:01.220 00:01:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@942 -- # '[' -z 569013 ']' 00:25:01.220 00:01:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # kill -0 569013 00:25:01.220 00:01:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@947 -- # uname 00:25:01.220 00:01:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:25:01.220 00:01:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 569013 00:25:01.480 00:01:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:25:01.480 00:01:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:25:01.480 00:01:16 nvmf_tcp.nvmf_perf 
-- common/autotest_common.sh@960 -- # echo 'killing process with pid 569013' 00:25:01.480 killing process with pid 569013 00:25:01.480 00:01:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@961 -- # kill 569013 00:25:01.480 00:01:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # wait 569013 00:25:03.392 00:01:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:03.392 00:01:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:03.392 00:01:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:03.392 00:01:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:03.392 00:01:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:03.392 00:01:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.392 00:01:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.392 00:01:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.934 00:01:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:05.934 00:25:05.934 real 0m24.515s 00:25:05.934 user 0m56.949s 00:25:05.934 sys 0m8.563s 00:25:05.934 00:01:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:25:05.934 00:01:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:05.934 ************************************ 00:25:05.934 END TEST nvmf_perf 00:25:05.934 ************************************ 00:25:05.934 00:01:20 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:25:05.934 00:01:20 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:05.934 00:01:20 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:25:05.934 00:01:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:25:05.934 00:01:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:05.934 ************************************ 00:25:05.934 START TEST nvmf_fio_host 00:25:05.934 ************************************ 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:05.934 * Looking for test storage... 
00:25:05.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:05.934 00:01:20 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:14.095 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:14.095 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:14.095 Found net devices under 0000:31:00.0: cvl_0_0 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.095 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:14.095 Found net devices under 0000:31:00.1: cvl_0_1 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
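As in the perf test earlier, gather_supported_nvmf_pci_devs first matches NICs by PCI vendor:device ID (0x8086:0x159b here, the E810 "ice" parts) and then resolves each PCI function to its kernel net device through sysfs. A short sketch of that mapping step, assuming the 0000:31:00.x addresses from this host:

    for pci in 0000:31:00.0 0000:31:00.1; do
        # every network PCI function lists its netdev name(s) under sysfs
        for netdev in /sys/bus/pci/devices/$pci/net/*; do
            echo "Found net devices under $pci: ${netdev##*/}"
        done
    done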
00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:14.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.739 ms 00:25:14.096 00:25:14.096 --- 10.0.0.2 ping statistics --- 00:25:14.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.096 rtt min/avg/max/mdev = 0.739/0.739/0.739/0.000 ms 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:14.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:14.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:25:14.096 00:25:14.096 --- 10.0.0.1 ping statistics --- 00:25:14.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.096 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=576416 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 576416 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@823 -- # '[' -z 576416 ']' 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@828 -- # local max_retries=100 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # xtrace_disable 00:25:14.096 00:01:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.096 [2024-07-16 00:01:28.980436] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:25:14.096 [2024-07-16 00:01:28.980504] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.096 [2024-07-16 00:01:29.060907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:14.096 [2024-07-16 00:01:29.135695] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:14.096 [2024-07-16 00:01:29.135734] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.096 [2024-07-16 00:01:29.135741] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.096 [2024-07-16 00:01:29.135748] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:14.096 [2024-07-16 00:01:29.135753] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:14.096 [2024-07-16 00:01:29.135897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.096 [2024-07-16 00:01:29.136019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:14.096 [2024-07-16 00:01:29.136179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.096 [2024-07-16 00:01:29.136180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:14.668 00:01:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:25:14.668 00:01:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # return 0 00:25:14.668 00:01:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:14.929 [2024-07-16 00:01:29.905173] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.929 00:01:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:14.929 00:01:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:14.929 00:01:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.929 00:01:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:14.929 Malloc1 00:25:15.216 00:01:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:15.216 00:01:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:15.477 00:01:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:15.477 [2024-07-16 00:01:30.611181] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.477 00:01:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:15.738 00:01:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:15.738 00:01:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:15.738 00:01:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:25:15.738 00:01:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:25:15.738 00:01:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:15.738 00:01:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local sanitizers 00:25:15.738 00:01:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:15.738 00:01:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # shift 00:25:15.738 00:01:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local asan_lib= 00:25:15.738 00:01:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:25:15.738 00:01:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:15.738 00:01:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # grep libasan 00:25:15.738 00:01:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:25:15.738 00:01:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # asan_lib= 00:25:15.738 00:01:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:25:15.738 00:01:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:25:15.738 00:01:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:15.738 00:01:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:25:15.738 00:01:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:25:15.738 00:01:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # asan_lib= 00:25:15.738 00:01:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:25:15.738 00:01:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:15.738 00:01:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:16.307 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:16.307 fio-3.35 00:25:16.307 Starting 1 thread 00:25:18.896 00:25:18.896 test: (groupid=0, jobs=1): err= 0: pid=577109: Tue Jul 16 00:01:33 2024 00:25:18.896 read: IOPS=14.0k, BW=54.7MiB/s (57.3MB/s)(110MiB/2004msec) 00:25:18.896 slat (usec): min=2, max=289, avg= 2.21, stdev= 2.18 00:25:18.896 clat (usec): min=3354, max=8619, avg=5035.02, stdev=370.46 00:25:18.896 lat (usec): min=3357, max=8622, avg=5037.22, stdev=370.52 00:25:18.896 clat percentiles (usec): 00:25:18.896 | 1.00th=[ 4178], 5.00th=[ 4490], 10.00th=[ 4621], 20.00th=[ 4752], 00:25:18.896 | 30.00th=[ 4883], 40.00th=[ 4948], 50.00th=[ 5014], 60.00th=[ 5145], 00:25:18.896 | 70.00th=[ 5211], 80.00th=[ 5342], 90.00th=[ 5473], 95.00th=[ 5604], 00:25:18.896 | 99.00th=[ 5866], 99.50th=[ 6063], 99.90th=[ 7504], 99.95th=[ 7767], 00:25:18.896 | 99.99th=[ 8586] 00:25:18.896 bw ( KiB/s): min=54656, max=56592, per=99.94%, avg=55966.00, stdev=883.54, samples=4 00:25:18.896 
iops : min=13664, max=14148, avg=13991.50, stdev=220.89, samples=4 00:25:18.896 write: IOPS=14.0k, BW=54.7MiB/s (57.4MB/s)(110MiB/2004msec); 0 zone resets 00:25:18.896 slat (usec): min=2, max=216, avg= 2.30, stdev= 1.45 00:25:18.896 clat (usec): min=2527, max=7473, avg=4049.25, stdev=306.29 00:25:18.896 lat (usec): min=2545, max=7476, avg=4051.56, stdev=306.39 00:25:18.896 clat percentiles (usec): 00:25:18.896 | 1.00th=[ 3359], 5.00th=[ 3589], 10.00th=[ 3687], 20.00th=[ 3818], 00:25:18.896 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4047], 60.00th=[ 4113], 00:25:18.896 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 4359], 95.00th=[ 4490], 00:25:18.896 | 99.00th=[ 4686], 99.50th=[ 4883], 99.90th=[ 6456], 99.95th=[ 6652], 00:25:18.896 | 99.99th=[ 7177] 00:25:18.896 bw ( KiB/s): min=55168, max=56360, per=100.00%, avg=56022.00, stdev=572.97, samples=4 00:25:18.896 iops : min=13792, max=14090, avg=14005.50, stdev=143.24, samples=4 00:25:18.896 lat (msec) : 4=21.59%, 10=78.41% 00:25:18.896 cpu : usr=70.59%, sys=25.61%, ctx=51, majf=0, minf=7 00:25:18.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:18.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:18.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:18.896 issued rwts: total=28055,28066,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:18.896 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:18.896 00:25:18.896 Run status group 0 (all jobs): 00:25:18.896 READ: bw=54.7MiB/s (57.3MB/s), 54.7MiB/s-54.7MiB/s (57.3MB/s-57.3MB/s), io=110MiB (115MB), run=2004-2004msec 00:25:18.896 WRITE: bw=54.7MiB/s (57.4MB/s), 54.7MiB/s-54.7MiB/s (57.4MB/s-57.4MB/s), io=110MiB (115MB), run=2004-2004msec 00:25:18.896 00:01:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:18.896 00:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:18.896 00:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:25:18.896 00:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:18.896 00:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local sanitizers 00:25:18.896 00:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:18.896 00:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # shift 00:25:18.896 00:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local asan_lib= 00:25:18.896 00:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:25:18.896 00:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:18.896 00:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # grep libasan 00:25:18.896 00:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:25:18.896 00:01:33 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1339 -- # asan_lib= 00:25:18.896 00:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:25:18.896 00:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:25:18.896 00:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:18.896 00:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:25:18.896 00:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:25:18.896 00:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # asan_lib= 00:25:18.896 00:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:25:18.896 00:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:18.896 00:01:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:18.896 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:18.896 fio-3.35 00:25:18.896 Starting 1 thread 00:25:21.441 00:25:21.441 test: (groupid=0, jobs=1): err= 0: pid=577777: Tue Jul 16 00:01:36 2024 00:25:21.441 read: IOPS=8907, BW=139MiB/s (146MB/s)(279MiB/2007msec) 00:25:21.441 slat (usec): min=3, max=110, avg= 3.65, stdev= 1.63 00:25:21.441 clat (usec): min=2150, max=19146, avg=8876.64, stdev=2239.02 00:25:21.441 lat (usec): min=2154, max=19150, avg=8880.29, stdev=2239.14 00:25:21.441 clat percentiles (usec): 00:25:21.441 | 1.00th=[ 4146], 5.00th=[ 5473], 10.00th=[ 6128], 20.00th=[ 6849], 00:25:21.441 | 30.00th=[ 7504], 40.00th=[ 8160], 50.00th=[ 8848], 60.00th=[ 9372], 00:25:21.441 | 70.00th=[10028], 80.00th=[10814], 90.00th=[11863], 95.00th=[12649], 00:25:21.441 | 99.00th=[14222], 99.50th=[15008], 99.90th=[15664], 99.95th=[15926], 00:25:21.441 | 99.99th=[17433] 00:25:21.441 bw ( KiB/s): min=63648, max=75200, per=49.77%, avg=70936.00, stdev=5216.06, samples=4 00:25:21.441 iops : min= 3978, max= 4700, avg=4433.50, stdev=326.00, samples=4 00:25:21.441 write: IOPS=5282, BW=82.5MiB/s (86.6MB/s)(144MiB/1750msec); 0 zone resets 00:25:21.441 slat (usec): min=40, max=408, avg=41.21, stdev= 7.95 00:25:21.441 clat (usec): min=3646, max=17150, avg=9571.61, stdev=1646.19 00:25:21.441 lat (usec): min=3686, max=17190, avg=9612.82, stdev=1647.84 00:25:21.441 clat percentiles (usec): 00:25:21.441 | 1.00th=[ 6128], 5.00th=[ 7242], 10.00th=[ 7635], 20.00th=[ 8160], 00:25:21.441 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896], 00:25:21.441 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11600], 95.00th=[12387], 00:25:21.441 | 99.00th=[14222], 99.50th=[15008], 99.90th=[15664], 99.95th=[16188], 00:25:21.441 | 99.99th=[17171] 00:25:21.441 bw ( KiB/s): min=67008, max=77824, per=87.50%, avg=73960.00, stdev=5124.89, samples=4 00:25:21.441 iops : min= 4188, max= 4864, avg=4622.50, stdev=320.31, samples=4 00:25:21.441 lat (msec) : 4=0.60%, 10=66.03%, 20=33.38% 00:25:21.441 cpu : usr=82.10%, sys=14.91%, ctx=13, majf=0, minf=20 00:25:21.441 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:21.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:25:21.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:21.441 issued rwts: total=17878,9245,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:21.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:21.441 00:25:21.441 Run status group 0 (all jobs): 00:25:21.441 READ: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=279MiB (293MB), run=2007-2007msec 00:25:21.441 WRITE: bw=82.5MiB/s (86.6MB/s), 82.5MiB/s-82.5MiB/s (86.6MB/s-86.6MB/s), io=144MiB (151MB), run=1750-1750msec 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:21.441 rmmod nvme_tcp 00:25:21.441 rmmod nvme_fabrics 00:25:21.441 rmmod nvme_keyring 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 576416 ']' 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 576416 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@942 -- # '[' -z 576416 ']' 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # kill -0 576416 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@947 -- # uname 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 576416 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@960 -- # echo 'killing process with pid 576416' 00:25:21.441 killing process with pid 576416 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@961 -- # kill 576416 00:25:21.441 00:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # wait 576416 00:25:21.701 00:01:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:21.701 00:01:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:21.701 00:01:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:21.701 00:01:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:21.701 00:01:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:21.701 00:01:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.701 00:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:21.701 00:01:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.611 00:01:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:23.611 00:25:23.611 real 0m18.202s 00:25:23.611 user 1m9.189s 00:25:23.611 sys 0m8.012s 00:25:23.611 00:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1118 -- # xtrace_disable 00:25:23.611 00:01:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.611 ************************************ 00:25:23.611 END TEST nvmf_fio_host 00:25:23.611 ************************************ 00:25:23.872 00:01:38 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:25:23.872 00:01:38 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:23.872 00:01:38 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:25:23.872 00:01:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:25:23.872 00:01:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.872 ************************************ 00:25:23.872 START TEST nvmf_failover 00:25:23.872 ************************************ 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:23.872 * Looking for test storage... 
00:25:23.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:23.872 00:01:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:23.872 00:01:39 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:23.872 00:01:39 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:23.872 00:01:39 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:23.872 00:01:39 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:23.872 00:01:39 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:23.872 00:01:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:23.872 00:01:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:23.872 00:01:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:23.872 00:01:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:25:23.872 00:01:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:23.872 00:01:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.872 00:01:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:23.873 00:01:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.873 00:01:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:23.873 00:01:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:23.873 00:01:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:25:23.873 00:01:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:32.009 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:32.010 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:32.010 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:32.010 Found net devices under 0000:31:00.0: cvl_0_0 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:32.010 Found net devices under 0000:31:00.1: cvl_0_1 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:32.010 00:01:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:32.010 00:01:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:32.010 00:01:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:32.010 00:01:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:32.010 00:01:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:32.010 00:01:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:32.010 00:01:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:32.271 00:01:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:32.271 00:01:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:32.271 00:01:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:32.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:32.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:25:32.271 00:25:32.271 --- 10.0.0.2 ping statistics --- 00:25:32.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.272 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:25:32.272 00:01:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:32.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:32.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:25:32.272 00:25:32.272 --- 10.0.0.1 ping statistics --- 00:25:32.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.272 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:25:32.272 00:01:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.272 00:01:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:25:32.272 00:01:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:32.272 00:01:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.272 00:01:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:32.272 00:01:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:32.272 00:01:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.272 00:01:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:32.272 00:01:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:32.272 00:01:47 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:32.272 00:01:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:32.272 00:01:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:32.272 00:01:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:32.272 00:01:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=582939 00:25:32.272 00:01:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:32.272 00:01:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 582939 00:25:32.272 00:01:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@823 -- # '[' -z 582939 ']' 00:25:32.272 00:01:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.272 00:01:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # local max_retries=100 00:25:32.272 00:01:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.272 00:01:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # xtrace_disable 00:25:32.272 00:01:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:32.272 [2024-07-16 00:01:47.409564] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:25:32.272 [2024-07-16 00:01:47.409629] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:32.602 [2024-07-16 00:01:47.507291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:32.602 [2024-07-16 00:01:47.601979] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:32.602 [2024-07-16 00:01:47.602044] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:32.602 [2024-07-16 00:01:47.602052] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:32.602 [2024-07-16 00:01:47.602059] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:32.602 [2024-07-16 00:01:47.602065] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:32.602 [2024-07-16 00:01:47.602219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:32.602 [2024-07-16 00:01:47.602387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.602 [2024-07-16 00:01:47.602387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:33.202 00:01:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:25:33.202 00:01:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # return 0 00:25:33.202 00:01:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:33.202 00:01:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:33.202 00:01:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:33.202 00:01:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.202 00:01:48 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:33.202 [2024-07-16 00:01:48.380386] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.462 00:01:48 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:33.462 Malloc0 00:25:33.462 00:01:48 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:33.723 00:01:48 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:33.983 00:01:48 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:33.983 [2024-07-16 00:01:49.063746] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.983 00:01:49 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:34.244 [2024-07-16 00:01:49.232153] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** 
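For readers following the xtrace above, the failover test's target bring-up boils down to the sequence below: move one port of the NIC pair into a private network namespace, address both ends, open TCP port 4420 on the initiator side, start nvmf_tgt inside the namespace, and create the TCP transport over RPC. This is a condensed, hand-written recap of commands already visible in the trace, not a drop-in script; the cvl_0_0/cvl_0_1 interface names, the 10.0.0.x addresses, and the /var/jenkins workspace paths are specific to this CI host and would need to be adjusted for another setup.

# target-side port goes into its own namespace; the initiator-side port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# allow NVMe/TCP traffic on the default port and load the kernel initiator driver
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
modprobe nvme-tcp

# launch the target inside the namespace (backgrounded here), then create the TCP transport over RPC
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

The subsystem itself (a Malloc0 namespace under nqn.2016-06.io.spdk:cnode1 with listeners on ports 4420, 4421 and 4422) is created by the rpc.py calls that follow in the trace below.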
NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:34.244 00:01:49 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:34.244 [2024-07-16 00:01:49.404619] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:34.504 00:01:49 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=583456 00:25:34.504 00:01:49 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:34.504 00:01:49 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:34.504 00:01:49 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 583456 /var/tmp/bdevperf.sock 00:25:34.504 00:01:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@823 -- # '[' -z 583456 ']' 00:25:34.504 00:01:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:34.504 00:01:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # local max_retries=100 00:25:34.504 00:01:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:34.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:34.504 00:01:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # xtrace_disable 00:25:34.504 00:01:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:35.076 00:01:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:25:35.076 00:01:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # return 0 00:25:35.076 00:01:50 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:35.336 NVMe0n1 00:25:35.336 00:01:50 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:35.597 00:25:35.597 00:01:50 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=583596 00:25:35.597 00:01:50 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:35.597 00:01:50 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:36.981 00:01:51 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:36.981 [2024-07-16 00:01:51.916455] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934770 is same with the state(5) to be set 00:25:36.981 [2024-07-16 00:01:51.916496] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934770 is same with the state(5) to be set 00:25:36.981 
[2024-07-16 00:01:51.916506] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934770 is same with the state(5) to be set 00:25:36.981 [2024-07-16 00:01:51.916511] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934770 is same with the state(5) to be set 00:25:36.981 [2024-07-16 00:01:51.916515] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934770 is same with the state(5) to be set 00:25:36.981 [2024-07-16 00:01:51.916520] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934770 is same with the state(5) to be set 00:25:36.981 [2024-07-16 00:01:51.916525] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934770 is same with the state(5) to be set 00:25:36.981 [2024-07-16 00:01:51.916529] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934770 is same with the state(5) to be set 00:25:36.981 [2024-07-16 00:01:51.916533] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934770 is same with the state(5) to be set 00:25:36.981 [2024-07-16 00:01:51.916538] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934770 is same with the state(5) to be set 00:25:36.982 [2024-07-16 00:01:51.916542] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934770 is same with the state(5) to be set 00:25:36.982 [2024-07-16 00:01:51.916547] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934770 is same with the state(5) to be set 00:25:36.982 [2024-07-16 00:01:51.916551] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934770 is same with the state(5) to be set 00:25:36.982 [2024-07-16 00:01:51.916556] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934770 is same with the state(5) to be set 00:25:36.982 [2024-07-16 00:01:51.916560] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934770 is same with the state(5) to be set 00:25:36.982 [2024-07-16 00:01:51.916565] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934770 is same with the state(5) to be set 00:25:36.982 [2024-07-16 00:01:51.916569] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934770 is same with the state(5) to be set 00:25:36.982 [2024-07-16 00:01:51.916574] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934770 is same with the state(5) to be set 00:25:36.982 [2024-07-16 00:01:51.916578] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934770 is same with the state(5) to be set 00:25:36.982 [2024-07-16 00:01:51.916583] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934770 is same with the state(5) to be set 00:25:36.982 [2024-07-16 00:01:51.916587] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934770 is same with the state(5) to be set 00:25:36.982 [2024-07-16 00:01:51.916592] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1934770 is same with the state(5) to be set 00:25:36.982 00:01:51 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:40.296 00:01:54 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t 
tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:40.296 00:25:40.296 00:01:55 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:40.558 [2024-07-16 00:01:55.519599] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935e70 is same with the state(5) to be set 00:25:40.558 [2024-07-16 00:01:55.519636] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935e70 is same with the state(5) to be set 00:25:40.558 [2024-07-16 00:01:55.519642] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935e70 is same with the state(5) to be set 00:25:40.558 [2024-07-16 00:01:55.519647] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935e70 is same with the state(5) to be set 00:25:40.558 [2024-07-16 00:01:55.519656] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935e70 is same with the state(5) to be set 00:25:40.558 [2024-07-16 00:01:55.519660] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935e70 is same with the state(5) to be set 00:25:40.558 [2024-07-16 00:01:55.519665] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935e70 is same with the state(5) to be set 00:25:40.558 [2024-07-16 00:01:55.519669] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935e70 is same with the state(5) to be set 00:25:40.558 [2024-07-16 00:01:55.519673] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935e70 is same with the state(5) to be set 00:25:40.558 [2024-07-16 00:01:55.519678] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935e70 is same with the state(5) to be set 00:25:40.558 [2024-07-16 00:01:55.519683] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935e70 is same with the state(5) to be set 00:25:40.558 [2024-07-16 00:01:55.519687] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935e70 is same with the state(5) to be set 00:25:40.558 [2024-07-16 00:01:55.519691] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935e70 is same with the state(5) to be set 00:25:40.558 [2024-07-16 00:01:55.519695] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935e70 is same with the state(5) to be set 00:25:40.558 [2024-07-16 00:01:55.519700] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935e70 is same with the state(5) to be set 00:25:40.558 [2024-07-16 00:01:55.519704] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935e70 is same with the state(5) to be set 00:25:40.558 [2024-07-16 00:01:55.519708] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935e70 is same with the state(5) to be set 00:25:40.558 00:01:55 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:43.856 00:01:58 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:43.856 [2024-07-16 00:01:58.696739] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.856 00:01:58 
nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:44.796 00:01:59 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:44.796 [2024-07-16 00:01:59.879565] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aedfd0 is same with the state(5) to be set
[the tcp.c:1621 message above repeats verbatim, differing only in its timestamp, for every recv-state transition from 2024-07-16 00:01:59.879597 through 00:01:59.880064 (log prefixes 00:25:44.796-00:25:44.798)]
00:25:44.798 00:01:59 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 583596 00:25:51.431 0 00:25:51.431 00:02:05 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 583456 00:25:51.431 00:02:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@942 -- # '[' -z 583456 ']' 00:25:51.431 00:02:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # kill -0 583456 00:25:51.431 00:02:05 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # uname 00:25:51.431 00:02:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:25:51.431 00:02:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 583456 00:25:51.431 00:02:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:25:51.431 00:02:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:25:51.431 00:02:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@960 -- # echo 'killing process with pid 583456' 00:25:51.431 killing process with pid 583456 00:25:51.431 00:02:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@961 -- # kill 583456 00:25:51.431 00:02:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # wait 583456 00:25:51.431 00:02:06 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:51.431 [2024-07-16 00:01:49.482260] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:25:51.431 [2024-07-16 00:01:49.482321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid583456 ] 00:25:51.431 [2024-07-16 00:01:49.548540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.431 [2024-07-16 00:01:49.612907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.431 Running I/O for 15 seconds... 00:25:51.431 [2024-07-16 00:01:51.918486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.431 [2024-07-16 00:01:51.918523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.431 [2024-07-16 00:01:51.918541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.431 [2024-07-16 00:01:51.918549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.431 [2024-07-16 00:01:51.918559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.431 [2024-07-16 00:01:51.918566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.431 [2024-07-16 00:01:51.918576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.431 [2024-07-16 00:01:51.918583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.431 [2024-07-16 00:01:51.918593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.431 [2024-07-16 00:01:51.918600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.431 [2024-07-16 00:01:51.918609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:96648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.431 [2024-07-16 00:01:51.918616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.431 [2024-07-16 00:01:51.918626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.431 [2024-07-16 00:01:51.918633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.431 [2024-07-16 00:01:51.918643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.431 [2024-07-16 00:01:51.918649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.431 [2024-07-16 00:01:51.918658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.431 [2024-07-16 00:01:51.918666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.431 [2024-07-16 00:01:51.918675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.431 [2024-07-16 00:01:51.918683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.431 [2024-07-16 00:01:51.918692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.431 [2024-07-16 00:01:51.918700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.431 [2024-07-16 00:01:51.918714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.431 [2024-07-16 00:01:51.918722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.431 [2024-07-16 00:01:51.918731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.431 [2024-07-16 00:01:51.918739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.431 [2024-07-16 00:01:51.918748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.431 [2024-07-16 00:01:51.918755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.431 [2024-07-16 00:01:51.918764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.432 [2024-07-16 00:01:51.918771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.918780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:51.432 [2024-07-16 00:01:51.918788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.918797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.432 [2024-07-16 00:01:51.918804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.918814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.432 [2024-07-16 00:01:51.918821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.918830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.432 [2024-07-16 00:01:51.918838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.918847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.432 [2024-07-16 00:01:51.918854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.918864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.432 [2024-07-16 00:01:51.918871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.918881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.432 [2024-07-16 00:01:51.918888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.918897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.432 [2024-07-16 00:01:51.918904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.918913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.432 [2024-07-16 00:01:51.918922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.918932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.432 [2024-07-16 00:01:51.918939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.918948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.432 
[2024-07-16 00:01:51.918955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.918964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.432 [2024-07-16 00:01:51.918971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.918980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.432 [2024-07-16 00:01:51.918987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.918997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:51.432 [2024-07-16 00:01:51.919460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.432 [2024-07-16 00:01:51.919467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.433 [2024-07-16 00:01:51.919613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919622] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.433 [2024-07-16 00:01:51.919629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.433 [2024-07-16 00:01:51.919646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.433 [2024-07-16 00:01:51.919662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.433 [2024-07-16 00:01:51.919678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.433 [2024-07-16 00:01:51.919696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.433 [2024-07-16 00:01:51.919712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919787] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97344 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.919987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.919996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.920003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.920012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.920019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.920028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.920034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.920043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.920051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.920060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.920067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.920075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.920082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.920091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.920098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.920107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 
[2024-07-16 00:01:51.920114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.920123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.920130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.920139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.920147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.433 [2024-07-16 00:01:51.920156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.433 [2024-07-16 00:01:51.920164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.434 [2024-07-16 00:01:51.920194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97456 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 [2024-07-16 00:01:51.920218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.434 [2024-07-16 00:01:51.920224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97464 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 [2024-07-16 00:01:51.920248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.434 [2024-07-16 00:01:51.920254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97472 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 [2024-07-16 00:01:51.920274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.434 [2024-07-16 00:01:51.920279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97480 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 
[2024-07-16 00:01:51.920299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.434 [2024-07-16 00:01:51.920305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97488 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 [2024-07-16 00:01:51.920325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.434 [2024-07-16 00:01:51.920331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97496 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 [2024-07-16 00:01:51.920351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.434 [2024-07-16 00:01:51.920357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97504 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 [2024-07-16 00:01:51.920377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.434 [2024-07-16 00:01:51.920385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97512 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 [2024-07-16 00:01:51.920404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.434 [2024-07-16 00:01:51.920410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97520 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 [2024-07-16 00:01:51.920430] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.434 [2024-07-16 00:01:51.920435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97528 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 [2024-07-16 00:01:51.920456] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.434 [2024-07-16 00:01:51.920462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97536 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 [2024-07-16 00:01:51.920481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.434 [2024-07-16 00:01:51.920487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97544 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 [2024-07-16 00:01:51.920506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.434 [2024-07-16 00:01:51.920512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97552 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 [2024-07-16 00:01:51.920531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.434 [2024-07-16 00:01:51.920537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97560 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 [2024-07-16 00:01:51.920557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.434 [2024-07-16 00:01:51.920563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97568 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 [2024-07-16 00:01:51.920587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.434 [2024-07-16 00:01:51.920593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97576 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 [2024-07-16 00:01:51.920613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:25:51.434 [2024-07-16 00:01:51.920619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97584 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 [2024-07-16 00:01:51.920639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.434 [2024-07-16 00:01:51.920645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97592 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 [2024-07-16 00:01:51.920664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.434 [2024-07-16 00:01:51.920670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97600 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 [2024-07-16 00:01:51.920692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.434 [2024-07-16 00:01:51.920698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97608 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 [2024-07-16 00:01:51.920717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.434 [2024-07-16 00:01:51.920723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97616 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 [2024-07-16 00:01:51.920743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.434 [2024-07-16 00:01:51.920749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97624 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 [2024-07-16 00:01:51.920768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.434 [2024-07-16 
00:01:51.920774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96888 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 [2024-07-16 00:01:51.920795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.434 [2024-07-16 00:01:51.920801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96896 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 [2024-07-16 00:01:51.920821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.434 [2024-07-16 00:01:51.920827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96904 len:8 PRP1 0x0 PRP2 0x0 00:25:51.434 [2024-07-16 00:01:51.920833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.434 [2024-07-16 00:01:51.920841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.434 [2024-07-16 00:01:51.920847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.435 [2024-07-16 00:01:51.920853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96912 len:8 PRP1 0x0 PRP2 0x0 00:25:51.435 [2024-07-16 00:01:51.920859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:51.920867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.435 [2024-07-16 00:01:51.920872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.435 [2024-07-16 00:01:51.920878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96920 len:8 PRP1 0x0 PRP2 0x0 00:25:51.435 [2024-07-16 00:01:51.920885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:51.920892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.435 [2024-07-16 00:01:51.920898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.435 [2024-07-16 00:01:51.920903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96928 len:8 PRP1 0x0 PRP2 0x0 00:25:51.435 [2024-07-16 00:01:51.920910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:51.920945] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x118bdf0 was disconnected and freed. reset controller. 
00:25:51.435 [2024-07-16 00:01:51.920954] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:51.435 [2024-07-16 00:01:51.920975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.435 [2024-07-16 00:01:51.920983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:51.920992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.435 [2024-07-16 00:01:51.920999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:51.921007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.435 [2024-07-16 00:01:51.921014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:51.930657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.435 [2024-07-16 00:01:51.930692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:51.930702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.435 [2024-07-16 00:01:51.930753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118fea0 (9): Bad file descriptor 00:25:51.435 [2024-07-16 00:01:51.935049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.435 [2024-07-16 00:01:51.980190] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
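The block above is one complete failover iteration: every request still queued on I/O qpair 1 is completed manually with ABORTED - SQ DELETION, the qpair (0x118bdf0) is disconnected and freed, bdev_nvme fails over from 10.0.0.2:4420 to 10.0.0.2:4421, the pending admin ASYNC EVENT REQUESTs are aborted the same way, and the controller reset completes. The "(00/08)" pair SPDK prints with each completion is the (status code type / status code) from the NVMe completion entry: SCT 0h (generic command status) and SC 08h (Command Aborted due to SQ Deletion), with dnr:0 meaning the command may be retried. Below is a minimal decoding sketch, assuming the standard NVMe completion-queue-entry layout; it is illustrative only and not SPDK's own print path.

/* Sketch: pull the (SCT/SC) pair that the log prints as "(00/08)" out of an
 * NVMe completion. Assumes the standard CQE layout: Dword 3 holds the
 * Command Identifier in bits 15:0, the Phase Tag in bit 16, and the
 * Status Field in bits 31:17. Illustrative only. */
#include <stdint.h>
#include <stdio.h>

static void print_status(uint32_t cqe_dw3)
{
    uint16_t status = (uint16_t)(cqe_dw3 >> 17); /* 15-bit Status Field */
    uint8_t  sc     = status & 0xff;             /* Status Code         */
    uint8_t  sct    = (status >> 8) & 0x7;       /* Status Code Type    */
    uint8_t  dnr    = (status >> 14) & 0x1;      /* Do Not Retry        */

    printf("sct:%02x sc:%02x dnr:%u\n",
           (unsigned)sct, (unsigned)sc, (unsigned)dnr);
    if (sct == 0x0 && sc == 0x08)
        printf("-> ABORTED - SQ DELETION (dnr:0, so retryable on another path)\n");
}

int main(void)
{
    /* Hypothetical DW3 matching the completions above: SCT=0, SC=0x08, DNR=0. */
    print_status(0x08u << 17);
    return 0;
}

With that decoding in mind, the repeated qid:1 completions above are path-loss aborts rather than media errors, which is why the bdev layer simply requeues them and retries once the controller reset on the new path (10.0.0.2:4421) succeeds.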
00:25:51.435 [2024-07-16 00:01:55.521550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.435 [2024-07-16 00:01:55.521588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:55.521604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.435 [2024-07-16 00:01:55.521612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:55.521622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.435 [2024-07-16 00:01:55.521629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:55.521638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.435 [2024-07-16 00:01:55.521645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:55.521654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.435 [2024-07-16 00:01:55.521662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:55.521671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.435 [2024-07-16 00:01:55.521678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:55.521687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.435 [2024-07-16 00:01:55.521694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:55.521704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.435 [2024-07-16 00:01:55.521711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:55.521719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.435 [2024-07-16 00:01:55.521727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:55.521736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.435 [2024-07-16 00:01:55.521743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:55.521752] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.435 [2024-07-16 00:01:55.521764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:55.521773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.435 [2024-07-16 00:01:55.521781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:55.521791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.435 [2024-07-16 00:01:55.521798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:55.521807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.435 [2024-07-16 00:01:55.521814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:55.521823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:39384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.435 [2024-07-16 00:01:55.521830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:55.521839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.435 [2024-07-16 00:01:55.521847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:55.521856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.435 [2024-07-16 00:01:55.521863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:55.521873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.435 [2024-07-16 00:01:55.521880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:55.521889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.435 [2024-07-16 00:01:55.521896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:55.521906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:39424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.435 [2024-07-16 00:01:55.521913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:55.521923] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:39432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.435 [2024-07-16 00:01:55.521931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:55.521940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:39440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.435 [2024-07-16 00:01:55.521947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:55.521956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.435 [2024-07-16 00:01:55.521963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:55.521974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.435 [2024-07-16 00:01:55.521981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.435 [2024-07-16 00:01:55.521990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.435 [2024-07-16 00:01:55.521997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.436 [2024-07-16 00:01:55.522014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.436 [2024-07-16 00:01:55.522029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.436 [2024-07-16 00:01:55.522045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.436 [2024-07-16 00:01:55.522061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.436 [2024-07-16 00:01:55.522077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:123 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.436 [2024-07-16 00:01:55.522093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.436 [2024-07-16 00:01:55.522109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.436 [2024-07-16 00:01:55.522125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.436 [2024-07-16 00:01:55.522142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.436 [2024-07-16 00:01:55.522158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.436 [2024-07-16 00:01:55.522174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.436 [2024-07-16 00:01:55.522192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.436 [2024-07-16 00:01:55.522208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.436 [2024-07-16 00:01:55.522224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.436 [2024-07-16 00:01:55.522245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:39528 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.436 [2024-07-16 00:01:55.522261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:39536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.436 [2024-07-16 00:01:55.522277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:39544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.436 [2024-07-16 00:01:55.522293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.436 [2024-07-16 00:01:55.522309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.436 [2024-07-16 00:01:55.522326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.436 [2024-07-16 00:01:55.522342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.436 [2024-07-16 00:01:55.522358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.436 [2024-07-16 00:01:55.522375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.436 [2024-07-16 00:01:55.522392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.436 [2024-07-16 00:01:55.522409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:51.436 [2024-07-16 00:01:55.522426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.436 [2024-07-16 00:01:55.522442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.436 [2024-07-16 00:01:55.522458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.436 [2024-07-16 00:01:55.522474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.436 [2024-07-16 00:01:55.522490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.436 [2024-07-16 00:01:55.522506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.436 [2024-07-16 00:01:55.522522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.436 [2024-07-16 00:01:55.522538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.436 [2024-07-16 00:01:55.522554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.436 [2024-07-16 00:01:55.522570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.436 [2024-07-16 00:01:55.522586] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.436 [2024-07-16 00:01:55.522604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.436 [2024-07-16 00:01:55.522620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.436 [2024-07-16 00:01:55.522636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.436 [2024-07-16 00:01:55.522652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:40032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.436 [2024-07-16 00:01:55.522670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.436 [2024-07-16 00:01:55.522686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.436 [2024-07-16 00:01:55.522695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.436 [2024-07-16 00:01:55.522703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.522712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.522719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.522729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.522736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.522745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.522752] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.522761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.522768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.522777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.522784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.522793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.522801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.522811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.522818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.522827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.522834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.522844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.522851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.522860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:40128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.522867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.522876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:40136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.522883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.522892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:40144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.522899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.522908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.522915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.522924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:40160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.522931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.522940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.522947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.522956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:40176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.522963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.522973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:40184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.522980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.522989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:40192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.522996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.523005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:40200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.523013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.523023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.523030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.523039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.523046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.523055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:40224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.523062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.523071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.523078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 
[2024-07-16 00:01:55.523087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.523094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.523103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:40248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.523110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.523119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:40256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.523127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.523136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:40264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.523143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.523152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:40272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.523159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.523167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:40280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.523175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.523184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:40288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.523191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.523200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:40296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.523207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.523218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:40304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.523226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.523238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:40312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.437 [2024-07-16 00:01:55.523245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.523265] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.437 [2024-07-16 00:01:55.523272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40320 len:8 PRP1 0x0 PRP2 0x0 00:25:51.437 [2024-07-16 00:01:55.523279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.523290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.437 [2024-07-16 00:01:55.523295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.437 [2024-07-16 00:01:55.523301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39632 len:8 PRP1 0x0 PRP2 0x0 00:25:51.437 [2024-07-16 00:01:55.523308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.523316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.437 [2024-07-16 00:01:55.523321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.437 [2024-07-16 00:01:55.523327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39640 len:8 PRP1 0x0 PRP2 0x0 00:25:51.437 [2024-07-16 00:01:55.523334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.523341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.437 [2024-07-16 00:01:55.523347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.437 [2024-07-16 00:01:55.523353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39648 len:8 PRP1 0x0 PRP2 0x0 00:25:51.437 [2024-07-16 00:01:55.523360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.523367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.437 [2024-07-16 00:01:55.523372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.437 [2024-07-16 00:01:55.523379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39656 len:8 PRP1 0x0 PRP2 0x0 00:25:51.437 [2024-07-16 00:01:55.523386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.437 [2024-07-16 00:01:55.523394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.437 [2024-07-16 00:01:55.523400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.437 [2024-07-16 00:01:55.523405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39664 len:8 PRP1 0x0 PRP2 0x0 00:25:51.437 [2024-07-16 00:01:55.523412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.523420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.438 [2024-07-16 00:01:55.523425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:25:51.438 [2024-07-16 00:01:55.523432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39672 len:8 PRP1 0x0 PRP2 0x0 00:25:51.438 [2024-07-16 00:01:55.523440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.523448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.438 [2024-07-16 00:01:55.523453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.438 [2024-07-16 00:01:55.523460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39680 len:8 PRP1 0x0 PRP2 0x0 00:25:51.438 [2024-07-16 00:01:55.523467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.523474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.438 [2024-07-16 00:01:55.523479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.438 [2024-07-16 00:01:55.523485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39688 len:8 PRP1 0x0 PRP2 0x0 00:25:51.438 [2024-07-16 00:01:55.523492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.523500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.438 [2024-07-16 00:01:55.523505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.438 [2024-07-16 00:01:55.523511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40328 len:8 PRP1 0x0 PRP2 0x0 00:25:51.438 [2024-07-16 00:01:55.523517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.523525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.438 [2024-07-16 00:01:55.523530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.438 [2024-07-16 00:01:55.523536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40336 len:8 PRP1 0x0 PRP2 0x0 00:25:51.438 [2024-07-16 00:01:55.523543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.523551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.438 [2024-07-16 00:01:55.523556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.438 [2024-07-16 00:01:55.523562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40344 len:8 PRP1 0x0 PRP2 0x0 00:25:51.438 [2024-07-16 00:01:55.523569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.523576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.438 [2024-07-16 00:01:55.523581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.438 [2024-07-16 
00:01:55.523587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40352 len:8 PRP1 0x0 PRP2 0x0 00:25:51.438 [2024-07-16 00:01:55.523594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.523601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.438 [2024-07-16 00:01:55.523607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.438 [2024-07-16 00:01:55.523612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40360 len:8 PRP1 0x0 PRP2 0x0 00:25:51.438 [2024-07-16 00:01:55.523619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.523627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.438 [2024-07-16 00:01:55.523633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.438 [2024-07-16 00:01:55.523640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40368 len:8 PRP1 0x0 PRP2 0x0 00:25:51.438 [2024-07-16 00:01:55.523647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.523654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.438 [2024-07-16 00:01:55.523660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.438 [2024-07-16 00:01:55.523666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40376 len:8 PRP1 0x0 PRP2 0x0 00:25:51.438 [2024-07-16 00:01:55.523672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.523680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.438 [2024-07-16 00:01:55.523686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.438 [2024-07-16 00:01:55.523692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40384 len:8 PRP1 0x0 PRP2 0x0 00:25:51.438 [2024-07-16 00:01:55.523699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.523706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.438 [2024-07-16 00:01:55.523711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.438 [2024-07-16 00:01:55.523717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40392 len:8 PRP1 0x0 PRP2 0x0 00:25:51.438 [2024-07-16 00:01:55.523726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.523734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.438 [2024-07-16 00:01:55.523740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.438 [2024-07-16 00:01:55.523746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40400 len:8 PRP1 0x0 PRP2 0x0 00:25:51.438 [2024-07-16 00:01:55.523753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.523761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.438 [2024-07-16 00:01:55.523766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.438 [2024-07-16 00:01:55.523772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39696 len:8 PRP1 0x0 PRP2 0x0 00:25:51.438 [2024-07-16 00:01:55.523779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.523787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.438 [2024-07-16 00:01:55.523793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.438 [2024-07-16 00:01:55.523799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39704 len:8 PRP1 0x0 PRP2 0x0 00:25:51.438 [2024-07-16 00:01:55.523806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.523813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.438 [2024-07-16 00:01:55.523818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.438 [2024-07-16 00:01:55.523825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39712 len:8 PRP1 0x0 PRP2 0x0 00:25:51.438 [2024-07-16 00:01:55.523832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.523840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.438 [2024-07-16 00:01:55.523846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.438 [2024-07-16 00:01:55.523852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39720 len:8 PRP1 0x0 PRP2 0x0 00:25:51.438 [2024-07-16 00:01:55.523859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.523866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.438 [2024-07-16 00:01:55.523872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.438 [2024-07-16 00:01:55.523878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39728 len:8 PRP1 0x0 PRP2 0x0 00:25:51.438 [2024-07-16 00:01:55.523885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.523893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.438 [2024-07-16 00:01:55.523898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.438 [2024-07-16 00:01:55.523904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:39736 len:8 PRP1 0x0 PRP2 0x0 00:25:51.438 [2024-07-16 00:01:55.523911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.523918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.438 [2024-07-16 00:01:55.523924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.438 [2024-07-16 00:01:55.523930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39744 len:8 PRP1 0x0 PRP2 0x0 00:25:51.438 [2024-07-16 00:01:55.523937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.523945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.438 [2024-07-16 00:01:55.523951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.438 [2024-07-16 00:01:55.523957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39752 len:8 PRP1 0x0 PRP2 0x0 00:25:51.438 [2024-07-16 00:01:55.523964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.524000] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11be960 was disconnected and freed. reset controller. 00:25:51.438 [2024-07-16 00:01:55.524009] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:51.438 [2024-07-16 00:01:55.524028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.438 [2024-07-16 00:01:55.524036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.524045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.438 [2024-07-16 00:01:55.524053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.524060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.438 [2024-07-16 00:01:55.524067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.524075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.438 [2024-07-16 00:01:55.524083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.438 [2024-07-16 00:01:55.533804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:51.438 [2024-07-16 00:01:55.533874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118fea0 (9): Bad file descriptor 00:25:51.438 [2024-07-16 00:01:55.538121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.438 [2024-07-16 00:01:55.573687] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:51.438 [2024-07-16 00:01:59.881811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.881846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.881862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.881870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.881880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.881887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.881897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:46696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.881904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.881913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.881921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.881930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:46712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.881938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.881947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.881954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.881963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.881971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.881980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.881987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.881996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:46744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:46760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:46776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:46792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:46800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:46808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:46816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:46848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:46856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:46864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:46880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 
00:01:59.882341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:46904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:46920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:46928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:46944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:46952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.439 [2024-07-16 00:01:59.882459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.439 [2024-07-16 00:01:59.882467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:46968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.882483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:46976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.882500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882509] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:46984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.882516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:46992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.882533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.882549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.882565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.882581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:47024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.882598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:47032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.882614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.882630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.882647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.882664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:90 nsid:1 lba:47064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.882681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:47072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.882699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:47080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.882715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:47088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.882731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:47096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.882748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:47104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.882764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:47168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.440 [2024-07-16 00:01:59.882781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.440 [2024-07-16 00:01:59.882797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:47184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.440 [2024-07-16 00:01:59.882813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:47192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.440 [2024-07-16 00:01:59.882830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:47200 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.440 [2024-07-16 00:01:59.882846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.440 [2024-07-16 00:01:59.882863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:47216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.440 [2024-07-16 00:01:59.882881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:47224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.440 [2024-07-16 00:01:59.882897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:47232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.440 [2024-07-16 00:01:59.882913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:47240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.440 [2024-07-16 00:01:59.882930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:47248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.440 [2024-07-16 00:01:59.882946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.440 [2024-07-16 00:01:59.882963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.440 [2024-07-16 00:01:59.882979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.882988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.440 [2024-07-16 00:01:59.882995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.883004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:47280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.440 
[2024-07-16 00:01:59.883011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.883021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.883028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.883037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:47120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.883044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.883053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:47128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.883061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.883070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:47136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.883077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.883087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:47144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.883095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.883104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.883111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.883121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:47160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:51.440 [2024-07-16 00:01:59.883130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.883139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:47288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.440 [2024-07-16 00:01:59.883146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.883156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.440 [2024-07-16 00:01:59.883163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.440 [2024-07-16 00:01:59.883172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:47304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.440 [2024-07-16 00:01:59.883179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:47312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:47320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:47328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:47416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:47424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:47432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:47440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:47448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:47464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:51.441 [2024-07-16 00:01:59.883524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:47480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:47496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:47520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:47528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.441 [2024-07-16 00:01:59.883663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.441 [2024-07-16 00:01:59.883692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47544 len:8 PRP1 0x0 PRP2 0x0 00:25:51.441 [2024-07-16 00:01:59.883699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.441 [2024-07-16 00:01:59.883715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.441 [2024-07-16 00:01:59.883722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47552 len:8 PRP1 0x0 PRP2 0x0 00:25:51.441 [2024-07-16 00:01:59.883730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.441 [2024-07-16 00:01:59.883743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.441 [2024-07-16 00:01:59.883749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47560 len:8 PRP1 0x0 PRP2 0x0 00:25:51.441 [2024-07-16 00:01:59.883757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.441 [2024-07-16 00:01:59.883770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.441 [2024-07-16 00:01:59.883776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47568 len:8 PRP1 0x0 PRP2 0x0 00:25:51.441 [2024-07-16 00:01:59.883783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.441 [2024-07-16 00:01:59.883796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.441 [2024-07-16 00:01:59.883802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47576 len:8 PRP1 0x0 PRP2 0x0 00:25:51.441 [2024-07-16 00:01:59.883809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.441 [2024-07-16 00:01:59.883822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.441 [2024-07-16 00:01:59.883828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47584 len:8 PRP1 0x0 PRP2 0x0 00:25:51.441 [2024-07-16 00:01:59.883835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.441 [2024-07-16 00:01:59.883848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.441 [2024-07-16 00:01:59.883854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47592 len:8 PRP1 0x0 PRP2 0x0 00:25:51.441 [2024-07-16 00:01:59.883861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.441 [2024-07-16 00:01:59.883875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.441 [2024-07-16 00:01:59.883881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47600 len:8 PRP1 0x0 PRP2 0x0 00:25:51.441 [2024-07-16 00:01:59.883888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.441 [2024-07-16 00:01:59.883896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.441 [2024-07-16 00:01:59.883901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.442 [2024-07-16 00:01:59.883907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47608 len:8 PRP1 0x0 PRP2 0x0 00:25:51.442 [2024-07-16 00:01:59.883914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.442 [2024-07-16 00:01:59.883922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.442 [2024-07-16 00:01:59.883927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.442 [2024-07-16 00:01:59.883935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47616 len:8 PRP1 0x0 PRP2 0x0 00:25:51.442 [2024-07-16 00:01:59.883942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.442 [2024-07-16 00:01:59.883950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.442 [2024-07-16 00:01:59.883956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.442 [2024-07-16 00:01:59.883962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47624 len:8 PRP1 0x0 PRP2 0x0 00:25:51.442 [2024-07-16 00:01:59.883969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.442 [2024-07-16 00:01:59.883977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.442 [2024-07-16 00:01:59.883982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.442 [2024-07-16 00:01:59.883988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47632 len:8 PRP1 0x0 PRP2 0x0 00:25:51.442 [2024-07-16 00:01:59.883995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.442 [2024-07-16 00:01:59.884003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.442 [2024-07-16 00:01:59.884009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.442 [2024-07-16 00:01:59.884015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47640 len:8 PRP1 0x0 PRP2 0x0 00:25:51.442 [2024-07-16 00:01:59.884022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.442 [2024-07-16 
00:01:59.884029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.442 [2024-07-16 00:01:59.884035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.442 [2024-07-16 00:01:59.884041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47648 len:8 PRP1 0x0 PRP2 0x0 00:25:51.442 [2024-07-16 00:01:59.884051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.442 [2024-07-16 00:01:59.884059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.442 [2024-07-16 00:01:59.884064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.442 [2024-07-16 00:01:59.884070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47656 len:8 PRP1 0x0 PRP2 0x0 00:25:51.442 [2024-07-16 00:01:59.884077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.442 [2024-07-16 00:01:59.884085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.442 [2024-07-16 00:01:59.884090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.442 [2024-07-16 00:01:59.895756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47664 len:8 PRP1 0x0 PRP2 0x0 00:25:51.442 [2024-07-16 00:01:59.895785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.442 [2024-07-16 00:01:59.895799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.442 [2024-07-16 00:01:59.895806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.442 [2024-07-16 00:01:59.895812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47672 len:8 PRP1 0x0 PRP2 0x0 00:25:51.442 [2024-07-16 00:01:59.895819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.442 [2024-07-16 00:01:59.895827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.442 [2024-07-16 00:01:59.895836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.442 [2024-07-16 00:01:59.895843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47680 len:8 PRP1 0x0 PRP2 0x0 00:25:51.442 [2024-07-16 00:01:59.895850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.442 [2024-07-16 00:01:59.895858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:51.442 [2024-07-16 00:01:59.895863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:51.442 [2024-07-16 00:01:59.895869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47688 len:8 PRP1 0x0 PRP2 0x0 00:25:51.442 [2024-07-16 00:01:59.895876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.442 [2024-07-16 00:01:59.895916] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11cc5b0 was disconnected and freed. reset controller. 00:25:51.442 [2024-07-16 00:01:59.895925] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:51.442 [2024-07-16 00:01:59.895953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.442 [2024-07-16 00:01:59.895962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.442 [2024-07-16 00:01:59.895972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.442 [2024-07-16 00:01:59.895979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.442 [2024-07-16 00:01:59.895987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.442 [2024-07-16 00:01:59.895995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.442 [2024-07-16 00:01:59.896002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.442 [2024-07-16 00:01:59.896009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.442 [2024-07-16 00:01:59.896017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.442 [2024-07-16 00:01:59.896057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118fea0 (9): Bad file descriptor 00:25:51.442 [2024-07-16 00:01:59.900191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.442 [2024-07-16 00:01:59.980361] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
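The records above trace one complete failover cycle inside bdevperf: the I/O qpair (0x11cc5b0) is disconnected and freed, bdev_nvme starts a failover from 10.0.0.2:4422 to 10.0.0.2:4420, the queued admin ASYNC EVENT REQUESTs complete with ABORTED - SQ DELETION, the controller briefly reports the failed state, and the reset on the new path succeeds. As a minimal sketch (not part of the captured log), the path a controller is attached to after such a failover could be inspected through the same RPC socket with the rpc.py client used elsewhere in this run; the -n name filter and the grep on the trsvcid field are assumptions, not taken from this log:

    # Sketch only: list the NVMe0 controller bdevperf created and show the
    # target port(s) it is attached to after the failover.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0 | grep trsvcid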
00:25:51.442
00:25:51.442 Latency(us)
00:25:51.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:51.442 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:51.442 Verification LBA range: start 0x0 length 0x4000
00:25:51.442 NVMe0n1 : 15.01 11277.51 44.05 356.92 0.00 10973.79 802.13 22282.24
00:25:51.442 ===================================================================================================================
00:25:51.442 Total : 11277.51 44.05 356.92 0.00 10973.79 802.13 22282.24
00:25:51.442 Received shutdown signal, test time was about 15.000000 seconds
00:25:51.442
00:25:51.442 Latency(us)
00:25:51.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:51.442 ===================================================================================================================
00:25:51.442 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:02:06 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:51.442 00:02:06 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:51.442 00:02:06 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:02:06 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=586497
00:25:51.442 00:02:06 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 586497 /var/tmp/bdevperf.sock
00:25:51.442 00:02:06 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:51.442 00:02:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@823 -- # '[' -z 586497 ']'
00:25:51.442 00:02:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:51.442 00:02:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # local max_retries=100
00:25:51.442 00:02:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:51.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
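The summary above closes the first bdevperf run: roughly 11.3k IOPS over the 15.01 s verify workload, with the failovers absorbed as retried I/O rather than failures. The trace that follows greps the captured output for 'Resetting controller successful' and requires exactly three matches, one per hop around the three target ports (4420, 4421, 4422), before a second bdevperf instance is started in wait mode (-z). A minimal sketch of that pass/fail check, assuming the run's output was captured to a file such as try.txt (the name the script cats later):

    # Sketch only: fail the test unless exactly three successful resets were logged.
    count=$(grep -c 'Resetting controller successful' try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi

The startup of that second bdevperf instance continues in the trace below.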
00:25:51.442 00:02:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # xtrace_disable 00:25:51.442 00:02:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:52.039 00:02:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:25:52.039 00:02:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # return 0 00:25:52.039 00:02:06 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:52.039 [2024-07-16 00:02:07.086633] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:52.039 00:02:07 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:52.299 [2024-07-16 00:02:07.255034] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:52.299 00:02:07 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:52.558 NVMe0n1 00:25:52.558 00:02:07 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:53.128 00:25:53.128 00:02:08 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:53.128 00:25:53.128 00:02:08 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:53.128 00:02:08 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:53.387 00:02:08 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:53.647 00:02:08 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:56.945 00:02:11 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:56.945 00:02:11 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:56.945 00:02:11 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=587709 00:25:56.945 00:02:11 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 587709 00:25:56.945 00:02:11 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:57.887 0 00:25:57.887 00:02:12 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:57.887 [2024-07-16 00:02:06.165653] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:25:57.887 [2024-07-16 00:02:06.165708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid586497 ] 00:25:57.887 [2024-07-16 00:02:06.232440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.887 [2024-07-16 00:02:06.295246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.887 [2024-07-16 00:02:08.592660] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:57.887 [2024-07-16 00:02:08.592704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.887 [2024-07-16 00:02:08.592716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.887 [2024-07-16 00:02:08.592725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.887 [2024-07-16 00:02:08.592732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.887 [2024-07-16 00:02:08.592741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.887 [2024-07-16 00:02:08.592748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.887 [2024-07-16 00:02:08.592755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.887 [2024-07-16 00:02:08.592762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.887 [2024-07-16 00:02:08.592769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.887 [2024-07-16 00:02:08.592798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:57.887 [2024-07-16 00:02:08.592812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f1ea0 (9): Bad file descriptor 00:25:57.887 [2024-07-16 00:02:08.643150] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:57.887 Running I/O for 1 seconds... 
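The capture above ends with the 1-second verify run starting; its per-device summary follows next. The harness obtains the capture by driving the idle bdevperf instance over RPC and keeping everything it prints for the later reset-count grep. A rough sketch; redirecting into the try.txt path seen in this run is an assumption about the script, not something shown in the trace:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  out=$spdk/test/nvmf/host/try.txt
  # ask the already-configured bdevperf instance to run its verify job and keep the output
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &> "$out" &
  run_test_pid=$!
  # the 4420 path detached earlier is what forces the failover recorded in this capture
  wait "$run_test_pid"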
00:25:57.887
00:25:57.887 Latency(us)
00:25:57.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:57.887 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:57.887 Verification LBA range: start 0x0 length 0x4000
00:25:57.887 NVMe0n1 : 1.01 11654.80 45.53 0.00 0.00 10927.83 2566.83 10103.47
00:25:57.887 ===================================================================================================================
00:25:57.887 Total : 11654.80 45.53 0.00 0.00 10927.83 2566.83 10103.47
00:25:57.887 00:02:12 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:57.887 00:02:12 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:25:58.153 00:02:13 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:58.153 00:02:13 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:58.153 00:02:13 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:25:58.415 00:02:13 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:58.415 00:02:13 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:26:01.717 00:02:16 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:01.717 00:02:16 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:26:01.717 00:02:16 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 586497
00:26:01.717 00:02:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@942 -- # '[' -z 586497 ']'
00:26:01.717 00:02:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # kill -0 586497
00:26:01.717 00:02:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # uname
00:26:01.718 00:02:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:26:01.718 00:02:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 586497
00:26:01.718 00:02:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # process_name=reactor_0
00:26:01.718 00:02:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']'
00:26:01.718 00:02:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@960 -- # echo 'killing process with pid 586497'
00:26:01.718 killing process with pid 586497
00:26:01.718 00:02:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@961 -- # kill 586497
00:26:01.718 00:02:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # wait 586497
00:26:01.978 00:02:16 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:26:01.978 00:02:16 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:01.979 00:02:17 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:26:01.979 00:02:17
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:01.979 00:02:17 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:01.979 00:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:01.979 00:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:26:01.979 00:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:01.979 00:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:26:01.979 00:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:01.979 00:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:01.979 rmmod nvme_tcp 00:26:01.979 rmmod nvme_fabrics 00:26:02.239 rmmod nvme_keyring 00:26:02.239 00:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:02.239 00:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:26:02.239 00:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:26:02.239 00:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 582939 ']' 00:26:02.239 00:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 582939 00:26:02.239 00:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@942 -- # '[' -z 582939 ']' 00:26:02.239 00:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # kill -0 582939 00:26:02.239 00:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # uname 00:26:02.239 00:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:26:02.239 00:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 582939 00:26:02.239 00:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:26:02.239 00:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:26:02.239 00:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@960 -- # echo 'killing process with pid 582939' 00:26:02.239 killing process with pid 582939 00:26:02.239 00:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@961 -- # kill 582939 00:26:02.239 00:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # wait 582939 00:26:02.239 00:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:02.239 00:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:02.239 00:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:02.239 00:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:02.239 00:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:02.239 00:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.239 00:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:02.239 00:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.784 00:02:19 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:04.784 00:26:04.784 real 0m40.597s 00:26:04.784 user 2m1.931s 00:26:04.784 sys 0m9.053s 00:26:04.784 00:02:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1118 -- # xtrace_disable 00:26:04.784 00:02:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:04.784 
************************************ 00:26:04.784 END TEST nvmf_failover 00:26:04.784 ************************************ 00:26:04.784 00:02:19 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:26:04.784 00:02:19 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:04.784 00:02:19 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:26:04.784 00:02:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:26:04.784 00:02:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:04.784 ************************************ 00:26:04.784 START TEST nvmf_host_discovery 00:26:04.784 ************************************ 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:04.784 * Looking for test storage... 00:26:04.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:04.784 00:02:19 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:26:04.784 00:02:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:12.927 00:02:27 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:12.927 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:12.927 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:12.927 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:12.928 00:02:27 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:12.928 Found net devices under 0000:31:00.0: cvl_0_0 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:12.928 Found net devices under 0000:31:00.1: cvl_0_1 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:12.928 00:02:27 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:12.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:12.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:26:12.928 00:26:12.928 --- 10.0.0.2 ping statistics --- 00:26:12.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.928 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:12.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:12.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.364 ms 00:26:12.928 00:26:12.928 --- 10.0.0.1 ping statistics --- 00:26:12.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.928 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:12.928 00:02:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:12.928 00:02:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:12.928 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:12.928 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.928 00:02:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=593519 00:26:12.928 00:02:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
593519 00:26:12.928 00:02:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:12.928 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@823 -- # '[' -z 593519 ']' 00:26:12.928 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.928 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # local max_retries=100 00:26:12.928 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.928 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # xtrace_disable 00:26:12.928 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:12.928 [2024-07-16 00:02:28.069328] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:26:12.928 [2024-07-16 00:02:28.069418] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.189 [2024-07-16 00:02:28.171464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.189 [2024-07-16 00:02:28.263690] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.189 [2024-07-16 00:02:28.263751] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.189 [2024-07-16 00:02:28.263759] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.189 [2024-07-16 00:02:28.263766] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.189 [2024-07-16 00:02:28.263772] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:13.189 [2024-07-16 00:02:28.263796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # return 0 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.762 [2024-07-16 00:02:28.894964] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.762 [2024-07-16 00:02:28.907164] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.762 null0 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.762 null1 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=593549 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 593549 /tmp/host.sock 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@823 -- # '[' -z 593549 ']' 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # local rpc_addr=/tmp/host.sock 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # local max_retries=100 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:13.762 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # xtrace_disable 00:26:13.762 00:02:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.023 [2024-07-16 00:02:29.002555] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:26:14.024 [2024-07-16 00:02:29.002620] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid593549 ] 00:26:14.024 [2024-07-16 00:02:29.074306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.024 [2024-07-16 00:02:29.150442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.595 00:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:26:14.595 00:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # return 0 00:26:14.595 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:14.595 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:14.595 00:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:14.595 00:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.595 00:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:14.595 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:14.595 00:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:14.595 00:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.856 00:02:29 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:14.856 00:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.857 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:14.857 00:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:14.857 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:14.857 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:14.857 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.857 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:14.857 00:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:14.857 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:14.857 00:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.857 00:02:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:14.857 00:02:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:14.857 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:14.857 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:14.857 00:02:30 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:26:14.857 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.857 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:14.857 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:14.857 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:14.857 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:14.857 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:14.857 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:14.857 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:14.857 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:14.857 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:15.118 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.119 [2024-07-16 00:02:30.110601] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:15.119 00:02:30 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_notification_count 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # (( notification_count == expected_count )) 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_names 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:15.119 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:15.379 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ '' == \n\v\m\e\0 ]] 00:26:15.379 00:02:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # sleep 1 00:26:15.950 [2024-07-16 00:02:30.833507] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:15.950 [2024-07-16 00:02:30.833529] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:15.951 [2024-07-16 00:02:30.833542] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:15.951 [2024-07-16 00:02:30.961963] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:15.951 [2024-07-16 00:02:31.024530] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:26:15.951 [2024-07-16 00:02:31.024554] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:16.211 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:26:16.211 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:16.211 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_names 00:26:16.211 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:16.211 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:16.212 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:16.212 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.212 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:16.212 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:16.212 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:16.212 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.212 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:26:16.212 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:16.212 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:16.212 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:26:16.212 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:26:16.212 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:16.212 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_bdev_list 00:26:16.212 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.212 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:16.212 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:16.212 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:16.212 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.212 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:26:16.472 00:02:31 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_paths nvme0 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ 4420 == \4\4\2\0 ]] 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:16.472 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_notification_count 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # (( notification_count == expected_count )) 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_bdev_list 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@908 -- # (( max-- )) 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_notification_count 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.473 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # (( notification_count == expected_count )) 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.734 [2024-07-16 00:02:31.670629] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:16.734 [2024-07-16 00:02:31.671686] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:16.734 [2024-07-16 00:02:31.671711] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_names 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_bdev_list 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.734 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:16.735 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:16.735 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:16.735 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:26:16.735 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:16.735 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:16.735 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:26:16.735 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:26:16.735 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:16.735 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_paths nvme0 00:26:16.735 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:16.735 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:16.735 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:16.735 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:16.735 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:16.735 00:02:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:16.735 [2024-07-16 00:02:31.801109] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:16.735 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:16.735 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:16.735 00:02:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # sleep 1 00:26:16.735 [2024-07-16 00:02:31.900904] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:16.735 [2024-07-16 00:02:31.900921] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:16.735 [2024-07-16 00:02:31.900926] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_paths nvme0 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_notification_count 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # (( notification_count == expected_count )) 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:17.732 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.994 [2024-07-16 00:02:32.926509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.994 [2024-07-16 00:02:32.926541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.994 [2024-07-16 00:02:32.926553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.994 [2024-07-16 00:02:32.926561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.994 [2024-07-16 00:02:32.926569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.994 [2024-07-16 00:02:32.926577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.994 [2024-07-16 00:02:32.926585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.994 [2024-07-16 00:02:32.926592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.994 [2024-07-16 00:02:32.926599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba19a0 is same with the state(5) to be set 00:26:17.994 [2024-07-16 00:02:32.926858] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:17.994 [2024-07-16 00:02:32.926873] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:17.994 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:17.994 00:02:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:17.994 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:17.994 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:26:17.994 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:26:17.994 00:02:32 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:17.994 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_names 00:26:17.994 [2024-07-16 00:02:32.936519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba19a0 (9): Bad file descriptor 00:26:17.994 00:02:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:17.994 00:02:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:17.994 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:17.994 00:02:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:17.994 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.994 00:02:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:17.994 [2024-07-16 00:02:32.946558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:17.994 [2024-07-16 00:02:32.946982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.994 [2024-07-16 00:02:32.946996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba19a0 with addr=10.0.0.2, port=4420 00:26:17.994 [2024-07-16 00:02:32.947005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba19a0 is same with the state(5) to be set 00:26:17.994 [2024-07-16 00:02:32.947017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba19a0 (9): Bad file descriptor 00:26:17.994 [2024-07-16 00:02:32.947035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:17.994 [2024-07-16 00:02:32.947043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:17.994 [2024-07-16 00:02:32.947051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:17.994 [2024-07-16 00:02:32.947063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.994 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:17.994 [2024-07-16 00:02:32.956614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:17.994 [2024-07-16 00:02:32.956964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.994 [2024-07-16 00:02:32.956976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba19a0 with addr=10.0.0.2, port=4420 00:26:17.994 [2024-07-16 00:02:32.956983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba19a0 is same with the state(5) to be set 00:26:17.994 [2024-07-16 00:02:32.956994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba19a0 (9): Bad file descriptor 00:26:17.994 [2024-07-16 00:02:32.957004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:17.994 [2024-07-16 00:02:32.957010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:17.994 [2024-07-16 00:02:32.957017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:17.994 [2024-07-16 00:02:32.957027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.994 [2024-07-16 00:02:32.966666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:17.994 [2024-07-16 00:02:32.967008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.995 [2024-07-16 00:02:32.967019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba19a0 with addr=10.0.0.2, port=4420 00:26:17.995 [2024-07-16 00:02:32.967026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba19a0 is same with the state(5) to be set 00:26:17.995 [2024-07-16 00:02:32.967037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba19a0 (9): Bad file descriptor 00:26:17.995 [2024-07-16 00:02:32.967047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:17.995 [2024-07-16 00:02:32.967053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:17.995 [2024-07-16 00:02:32.967060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:17.995 [2024-07-16 00:02:32.967070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.995 [2024-07-16 00:02:32.976718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:17.995 [2024-07-16 00:02:32.977085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.995 [2024-07-16 00:02:32.977098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba19a0 with addr=10.0.0.2, port=4420 00:26:17.995 [2024-07-16 00:02:32.977110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba19a0 is same with the state(5) to be set 00:26:17.995 [2024-07-16 00:02:32.977122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba19a0 (9): Bad file descriptor 00:26:17.995 [2024-07-16 00:02:32.977139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:17.995 [2024-07-16 00:02:32.977146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:17.995 [2024-07-16 00:02:32.977153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:17.995 [2024-07-16 00:02:32.977163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.995 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.995 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:26:17.995 00:02:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:17.995 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:17.995 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:26:17.995 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:26:17.995 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:17.995 [2024-07-16 00:02:32.986776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:17.995 [2024-07-16 00:02:32.987029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.995 [2024-07-16 00:02:32.987041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba19a0 with addr=10.0.0.2, port=4420 00:26:17.995 [2024-07-16 00:02:32.987048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba19a0 is same with the state(5) to be set 00:26:17.995 [2024-07-16 00:02:32.987059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba19a0 (9): Bad file descriptor 00:26:17.995 [2024-07-16 00:02:32.987069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:17.995 [2024-07-16 00:02:32.987076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:17.995 [2024-07-16 00:02:32.987083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:17.995 [2024-07-16 00:02:32.987093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.995 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_bdev_list 00:26:17.995 00:02:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:17.995 00:02:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:17.995 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:17.995 00:02:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:17.995 00:02:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.995 00:02:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:17.995 [2024-07-16 00:02:32.996828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:17.995 [2024-07-16 00:02:32.997169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.995 [2024-07-16 00:02:32.997182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba19a0 with addr=10.0.0.2, port=4420 00:26:17.995 [2024-07-16 00:02:32.997189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba19a0 is same with the state(5) to be set 00:26:17.995 [2024-07-16 00:02:32.997201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba19a0 (9): Bad file descriptor 00:26:17.995 [2024-07-16 00:02:32.997215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:17.995 [2024-07-16 00:02:32.997221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:17.995 [2024-07-16 00:02:32.997228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:17.995 [2024-07-16 00:02:32.997244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.995 [2024-07-16 00:02:33.006882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:17.995 [2024-07-16 00:02:33.007222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.995 [2024-07-16 00:02:33.007237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba19a0 with addr=10.0.0.2, port=4420 00:26:17.995 [2024-07-16 00:02:33.007245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba19a0 is same with the state(5) to be set 00:26:17.995 [2024-07-16 00:02:33.007256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba19a0 (9): Bad file descriptor 00:26:17.995 [2024-07-16 00:02:33.007266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:17.995 [2024-07-16 00:02:33.007272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:17.995 [2024-07-16 00:02:33.007279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:17.995 [2024-07-16 00:02:33.007289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.995 [2024-07-16 00:02:33.014557] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:17.995 [2024-07-16 00:02:33.014574] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_paths nvme0 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ 4421 == \4\4\2\1 ]] 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_notification_count 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # (( notification_count == expected_count )) 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_names 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:17.995 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:17.996 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ '' == '' ]] 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:18.255 
00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_bdev_list 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ '' == '' ]] 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_notification_count 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # (( notification_count == expected_count )) 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:18.255 00:02:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.195 [2024-07-16 00:02:34.363365] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:19.195 [2024-07-16 00:02:34.363384] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:19.195 [2024-07-16 00:02:34.363396] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:19.456 [2024-07-16 00:02:34.492812] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:19.717 [2024-07-16 00:02:34.763360] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:19.717 [2024-07-16 00:02:34.763390] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@642 -- # local es=0 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@645 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:26:19.717 request: 00:26:19.717 { 00:26:19.717 "name": "nvme", 00:26:19.717 "trtype": "tcp", 00:26:19.717 "traddr": "10.0.0.2", 00:26:19.717 "adrfam": "ipv4", 00:26:19.717 "trsvcid": "8009", 00:26:19.717 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:19.717 "wait_for_attach": true, 00:26:19.717 "method": "bdev_nvme_start_discovery", 00:26:19.717 "req_id": 1 00:26:19.717 } 00:26:19.717 Got JSON-RPC error response 00:26:19.717 response: 00:26:19.717 { 00:26:19.717 "code": -17, 00:26:19.717 "message": "File exists" 00:26:19.717 } 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@645 -- # es=1 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@642 -- # local es=0 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@630 -- 
# local arg=rpc_cmd 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@645 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:19.717 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.978 request: 00:26:19.978 { 00:26:19.978 "name": "nvme_second", 00:26:19.978 "trtype": "tcp", 00:26:19.978 "traddr": "10.0.0.2", 00:26:19.978 "adrfam": "ipv4", 00:26:19.978 "trsvcid": "8009", 00:26:19.978 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:19.978 "wait_for_attach": true, 00:26:19.978 "method": "bdev_nvme_start_discovery", 00:26:19.978 "req_id": 1 00:26:19.978 } 00:26:19.978 Got JSON-RPC error response 00:26:19.978 response: 00:26:19.978 { 00:26:19.978 "code": -17, 00:26:19.978 "message": "File exists" 00:26:19.978 } 00:26:19.978 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:26:19.978 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@645 -- # es=1 00:26:19.978 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:26:19.978 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:26:19.978 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:26:19.978 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:19.978 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:19.978 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:19.978 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:19.978 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:19.978 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.978 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:19.978 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:19.978 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:19.978 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:19.978 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:19.978 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:19.978 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:19.978 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:19.978 00:02:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.978 00:02:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:19.978 00:02:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:19.978 00:02:35 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:19.978 00:02:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:19.978 00:02:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@642 -- # local es=0 00:26:19.978 00:02:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:19.978 00:02:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:26:19.978 00:02:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:26:19.978 00:02:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:26:19.978 00:02:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:26:19.978 00:02:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@645 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:19.978 00:02:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:19.978 00:02:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.918 [2024-07-16 00:02:36.030845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.919 [2024-07-16 00:02:36.030874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbbcdd0 with addr=10.0.0.2, port=8010 00:26:20.919 [2024-07-16 00:02:36.030887] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:20.919 [2024-07-16 00:02:36.030894] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:20.919 [2024-07-16 00:02:36.030901] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:21.859 [2024-07-16 00:02:37.033282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.859 [2024-07-16 00:02:37.033310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbbcdd0 with addr=10.0.0.2, port=8010 00:26:21.859 [2024-07-16 00:02:37.033322] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:21.859 [2024-07-16 00:02:37.033329] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:21.859 [2024-07-16 00:02:37.033336] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:23.242 [2024-07-16 00:02:38.035238] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:23.242 request: 00:26:23.242 { 00:26:23.242 "name": "nvme_second", 00:26:23.242 "trtype": "tcp", 00:26:23.242 "traddr": "10.0.0.2", 00:26:23.242 "adrfam": "ipv4", 00:26:23.242 "trsvcid": "8010", 00:26:23.242 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:23.242 "wait_for_attach": false, 00:26:23.242 "attach_timeout_ms": 3000, 00:26:23.242 "method": "bdev_nvme_start_discovery", 00:26:23.242 "req_id": 1 00:26:23.242 } 00:26:23.242 Got JSON-RPC error response 00:26:23.242 response: 00:26:23.242 { 00:26:23.242 "code": -110, 
00:26:23.242 "message": "Connection timed out" 00:26:23.242 } 00:26:23.242 00:02:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:26:23.242 00:02:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@645 -- # es=1 00:26:23.242 00:02:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:26:23.242 00:02:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:26:23.242 00:02:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:26:23.242 00:02:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:23.242 00:02:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:23.242 00:02:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:23.242 00:02:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:23.242 00:02:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.242 00:02:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 593549 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:23.243 rmmod nvme_tcp 00:26:23.243 rmmod nvme_fabrics 00:26:23.243 rmmod nvme_keyring 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 593519 ']' 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 593519 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@942 -- # '[' -z 593519 ']' 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # kill -0 593519 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@947 -- # uname 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 593519 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:26:23.243 
00:02:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@960 -- # echo 'killing process with pid 593519' 00:26:23.243 killing process with pid 593519 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@961 -- # kill 593519 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # wait 593519 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:23.243 00:02:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:25.784 00:26:25.784 real 0m20.868s 00:26:25.784 user 0m23.423s 00:26:25.784 sys 0m7.669s 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1118 -- # xtrace_disable 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.784 ************************************ 00:26:25.784 END TEST nvmf_host_discovery 00:26:25.784 ************************************ 00:26:25.784 00:02:40 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:26:25.784 00:02:40 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:25.784 00:02:40 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:26:25.784 00:02:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:26:25.784 00:02:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:25.784 ************************************ 00:26:25.784 START TEST nvmf_host_multipath_status 00:26:25.784 ************************************ 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:25.784 * Looking for test storage... 
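A note on the JSON-RPC error captured in the discovery test above: discovery.sh points the nvme_second discovery controller at 10.0.0.2 port 8010, where no listener exists, and wraps the RPC in the NOT helper so that the expected failure counts as a pass; the repeated connect() errno 111 entries followed by the -110 "Connection timed out" response are the intended outcome. Roughly the same negative check can be reproduced by hand with rpc.py against the host application's RPC socket (socket path, address, and NQN taken from the trace; this is an illustrative sketch, not part of the test script):

    # expected to fail after ~3 s with JSON-RPC error -110 ("Connection timed out")
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -T 3000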
00:26:25.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.784 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:25.785 00:02:40 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:26:25.785 00:02:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:33.920 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:33.920 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
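Up to this point nvmf/common.sh has only matched PCI IDs: both 0000:31:00.0 and 0000:31:00.1 were identified as Intel E810 functions (0x8086:0x159b) and the RDMA-only branches were skipped because the transport is tcp. The entries that follow resolve each matched PCI function to its kernel net device through sysfs before assigning the target and initiator roles. A minimal sketch of that lookup, using the two addresses from this run purely as illustration:

    # each PCI function exposes its net device name under sysfs;
    # on this system these resolve to cvl_0_0 and cvl_0_1
    for pci in 0000:31:00.0 0000:31:00.1; do
        ls "/sys/bus/pci/devices/$pci/net/"
    done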
00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:33.920 Found net devices under 0000:31:00.0: cvl_0_0 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:33.920 Found net devices under 0000:31:00.1: cvl_0_1 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:33.920 00:02:48 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:33.920 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:33.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:33.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:26:33.921 00:26:33.921 --- 10.0.0.2 ping statistics --- 00:26:33.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.921 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:26:33.921 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:33.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:33.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.363 ms 00:26:33.921 00:26:33.921 --- 10.0.0.1 ping statistics --- 00:26:33.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.921 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:26:33.921 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:33.921 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:26:33.921 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:33.921 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:33.921 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:33.921 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:33.921 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:33.921 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:33.921 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:33.921 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:33.921 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:33.921 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:33.921 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:33.921 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=600253 00:26:33.921 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 600253 00:26:33.921 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@823 -- # '[' -z 600253 ']' 00:26:33.921 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.921 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # local max_retries=100 00:26:33.921 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:33.921 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # xtrace_disable 00:26:33.921 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:33.921 00:02:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:33.921 [2024-07-16 00:02:48.897050] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:26:33.921 [2024-07-16 00:02:48.897117] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:33.921 [2024-07-16 00:02:48.976900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:33.921 [2024-07-16 00:02:49.050266] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:33.921 [2024-07-16 00:02:49.050307] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:33.921 [2024-07-16 00:02:49.050314] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:33.921 [2024-07-16 00:02:49.050323] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:33.921 [2024-07-16 00:02:49.050329] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:33.921 [2024-07-16 00:02:49.050396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.921 [2024-07-16 00:02:49.050398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.492 00:02:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:26:34.492 00:02:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # return 0 00:26:34.492 00:02:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:34.492 00:02:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:34.492 00:02:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:34.753 00:02:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:34.753 00:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=600253 00:26:34.753 00:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:34.753 [2024-07-16 00:02:49.822337] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:34.753 00:02:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:35.013 Malloc0 00:26:35.013 00:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:35.013 00:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:35.273 00:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:35.273 [2024-07-16 00:02:50.458062] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.534 00:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:35.534 [2024-07-16 00:02:50.610422] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:35.534 00:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=600636 00:26:35.534 00:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:35.534 00:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:35.534 00:02:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 600636 /var/tmp/bdevperf.sock 00:26:35.534 00:02:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@823 -- # '[' -z 600636 ']' 00:26:35.534 00:02:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:35.534 00:02:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # local max_retries=100 00:26:35.534 00:02:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:35.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:35.534 00:02:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # xtrace_disable 00:26:35.534 00:02:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:36.483 00:02:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:26:36.483 00:02:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # return 0 00:26:36.483 00:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:36.483 00:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:36.744 Nvme0n1 00:26:36.744 00:02:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:37.314 Nvme0n1 00:26:37.314 00:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:37.314 00:02:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:39.228 00:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:39.228 00:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 -n optimized 00:26:39.489 00:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:39.489 00:02:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:40.879 00:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:40.879 00:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:40.880 00:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.880 00:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:40.880 00:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.880 00:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:40.880 00:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.880 00:02:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:40.880 00:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:40.880 00:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:40.880 00:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.880 00:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:41.141 00:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.141 00:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:41.141 00:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.141 00:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:41.402 00:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.402 00:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:41.402 00:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:41.402 00:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
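Each check_status call in this trace expands into six port_status probes like the ones surrounding this point: bdevperf's RPC socket is asked for bdev_nvme_get_io_paths, jq extracts the current/connected/accessible flag for the path on a given port, and the result is compared against the expected value. The same query can be issued manually while bdevperf is running (socket path and port as in the trace; shown with rpc.py directly rather than the script's helper):

    # print the "accessible" flag of the io_path using port 4420 on the Nvme0n1 multipath bdev
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").accessible'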
00:26:41.402 00:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.402 00:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:41.402 00:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.402 00:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:41.663 00:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.663 00:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:41.663 00:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:41.924 00:02:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:41.924 00:02:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:42.866 00:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:42.866 00:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:42.866 00:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.866 00:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:43.127 00:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:43.127 00:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:43.127 00:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.127 00:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:43.388 00:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.388 00:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:43.388 00:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.388 00:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:43.388 00:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.388 00:02:58 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:43.388 00:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.388 00:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:43.649 00:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.649 00:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:43.649 00:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.649 00:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:43.910 00:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.910 00:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:43.910 00:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.910 00:02:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:43.910 00:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.910 00:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:43.910 00:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:44.170 00:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:44.431 00:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:45.373 00:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:45.373 00:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:45.373 00:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.373 00:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:45.373 00:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.373 00:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:45.373 00:03:00 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.373 00:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:45.633 00:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:45.633 00:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:45.633 00:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.633 00:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:45.895 00:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.895 00:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:45.895 00:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.895 00:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:45.895 00:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.895 00:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:45.895 00:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:45.895 00:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.155 00:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.156 00:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:46.156 00:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.156 00:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:46.416 00:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.416 00:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:46.416 00:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:46.416 00:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:46.677 00:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:47.620 00:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:47.620 00:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:47.620 00:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.620 00:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:47.881 00:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:47.881 00:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:47.881 00:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.881 00:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:47.881 00:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:47.881 00:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:47.881 00:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.881 00:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:48.142 00:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.142 00:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:48.142 00:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.142 00:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:48.402 00:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.402 00:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:48.402 00:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.402 00:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:48.402 00:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.402 00:03:03 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:48.402 00:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.402 00:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:48.662 00:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:48.662 00:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:48.662 00:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:48.921 00:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:48.921 00:03:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:50.304 00:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:50.304 00:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:50.304 00:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.304 00:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:50.304 00:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:50.304 00:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:50.304 00:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:50.304 00:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.304 00:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:50.304 00:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:50.304 00:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.304 00:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:50.564 00:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.564 00:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:50.564 00:03:05 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.564 00:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:50.827 00:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.827 00:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:50.827 00:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.827 00:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:50.827 00:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:50.827 00:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:50.827 00:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.827 00:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:51.087 00:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:51.087 00:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:51.087 00:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:51.087 00:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:51.347 00:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:52.287 00:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:52.287 00:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:52.287 00:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.287 00:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:52.548 00:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:52.548 00:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:52.548 00:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:52.548 00:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:52.548 00:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:52.548 00:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:52.808 00:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.808 00:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:52.808 00:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:52.808 00:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:52.808 00:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.808 00:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:53.119 00:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.119 00:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:53.119 00:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.119 00:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:53.119 00:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:53.119 00:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:53.119 00:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.119 00:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:53.378 00:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.378 00:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:53.637 00:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:53.637 00:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:53.637 00:03:08 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:53.897 00:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:54.943 00:03:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:54.943 00:03:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:54.943 00:03:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.943 00:03:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:54.943 00:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.943 00:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:54.943 00:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.943 00:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:55.214 00:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.214 00:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:55.214 00:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.214 00:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:55.474 00:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.474 00:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:55.474 00:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.474 00:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:55.474 00:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.474 00:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:55.474 00:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.474 00:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:55.735 00:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ 
true == \t\r\u\e ]] 00:26:55.735 00:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:55.735 00:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.735 00:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:55.996 00:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.996 00:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:55.996 00:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:55.996 00:03:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:56.256 00:03:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:57.198 00:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:57.198 00:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:57.198 00:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.198 00:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:57.458 00:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:57.458 00:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:57.458 00:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.458 00:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:57.458 00:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.458 00:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:57.458 00:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.458 00:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:57.718 00:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.718 00:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected 
true 00:26:57.718 00:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:57.718 00:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.979 00:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.979 00:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:57.979 00:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.979 00:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:57.979 00:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.979 00:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:57.979 00:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.979 00:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:58.240 00:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.240 00:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:58.240 00:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:58.499 00:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:58.499 00:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:59.441 00:03:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:59.441 00:03:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:59.441 00:03:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.441 00:03:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:59.702 00:03:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:59.702 00:03:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:59.702 00:03:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.702 00:03:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:59.963 00:03:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:59.963 00:03:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:59.963 00:03:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.963 00:03:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:59.963 00:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:59.963 00:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:59.963 00:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.963 00:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:00.223 00:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.223 00:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:00.223 00:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.223 00:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:00.483 00:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.483 00:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:00.483 00:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.483 00:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:00.483 00:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.483 00:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:00.483 00:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:00.743 00:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 -n inaccessible 00:27:01.004 00:03:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:01.943 00:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:01.944 00:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:01.944 00:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.944 00:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:02.204 00:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.204 00:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:02.204 00:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.204 00:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:02.204 00:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:02.204 00:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:02.204 00:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.204 00:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:02.464 00:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.464 00:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:02.464 00:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.464 00:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:02.725 00:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.725 00:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:02.725 00:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.725 00:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:02.725 00:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.725 00:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:02.725 
00:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.725 00:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:02.987 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:02.987 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 600636 00:27:02.987 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@942 -- # '[' -z 600636 ']' 00:27:02.987 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # kill -0 600636 00:27:02.987 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # uname 00:27:02.987 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:27:02.987 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 600636 00:27:02.987 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:27:02.987 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:27:02.987 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # echo 'killing process with pid 600636' 00:27:02.987 killing process with pid 600636 00:27:02.987 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@961 -- # kill 600636 00:27:02.987 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # wait 600636 00:27:02.987 Connection closed with partial response: 00:27:02.987 00:27:02.987 00:27:03.251 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 600636 00:27:03.251 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:03.251 [2024-07-16 00:02:50.672897] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:27:03.251 [2024-07-16 00:02:50.672955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid600636 ] 00:27:03.251 [2024-07-16 00:02:50.729496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.252 [2024-07-16 00:02:50.781266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:03.252 Running I/O for 90 seconds... 
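The xtrace above drives test/nvmf/host/multipath_status.sh through each ANA-state combination and, after every transition, asks the bdevperf initiator how it sees the two paths. Each check reduces to one RPC plus a jq filter. The block below is a rough reconstruction of those helpers from the commands visible in the trace; the helper names match the ones invoked in the xtrace (port_status, check_status), but the real script's wording may differ.

#!/usr/bin/env bash
# Reconstructed sketch of the status helpers seen in the xtrace above -- not the
# verbatim SPDK script. Paths and socket names are the ones printed in the log.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_sock=/var/tmp/bdevperf.sock

# port_status <trsvcid> <field> <expected>
# Dump bdevperf's I/O paths and compare one boolean field (current / connected /
# accessible) of the path whose listener uses <trsvcid> against <expected>.
port_status() {
	local port=$1 field=$2 expected=$3 actual
	actual=$("$rpc_py" -s "$bdevperf_sock" bdev_nvme_get_io_paths |
		jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
	[[ $actual == "$expected" ]]
}

# check_status <4420 current> <4421 current> <4420 connected> <4421 connected>
#              <4420 accessible> <4421 accessible>
check_status() {
	port_status 4420 current "$1"
	port_status 4421 current "$2"
	port_status 4420 connected "$3"
	port_status 4421 connected "$4"
	port_status 4420 accessible "$5"
	port_status 4421 accessible "$6"
}

Because the autotest scripts run under set -e, the first comparison that does not match its expected literal aborts the test, which is why every [[ ... == ... ]] line in the trace evaluates against exactly the value that step expects.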
00:27:03.252 [2024-07-16 00:03:03.878979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:47168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.252 [2024-07-16 00:03:03.879013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:47176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.252 [2024-07-16 00:03:03.879052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:47184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.252 [2024-07-16 00:03:03.879068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:47192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.252 [2024-07-16 00:03:03.879083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:47200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.252 [2024-07-16 00:03:03.879098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:47208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.252 [2024-07-16 00:03:03.879113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:47232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:47240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:47248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:47256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:47304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:47312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:47320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:47328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:47344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:03.252 [2024-07-16 00:03:03.879940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.879984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:47440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.879988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.880000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.880004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.880016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.880021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.880034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.880040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.880051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.880056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.880068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.880073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.880084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:47488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.880089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.880100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 
lba:47496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.880105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:03.252 [2024-07-16 00:03:03.880117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:47504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.252 [2024-07-16 00:03:03.880122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:47512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:47520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:47536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:47544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:47552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:47560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:47568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:47600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:47616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:47624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:47632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 
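Every WRITE that bdevperf had queued toward the path that just went inaccessible completes with NVMe status 03/02, printed here as ASYMMETRIC ACCESS INACCESSIBLE on qid:1; the bdev_nvme layer treats that as a path error and retries the I/O on the other path instead of failing it up to bdevperf. To confirm that a capture such as try.txt contains only this status (and not, say, aborts or timeouts), a quick tally like the one below works; the path is the one printed by the cat step earlier and is only illustrative for other runs.

# Count each distinct completion status string in the captured bdevperf log.
grep -o '\*NOTICE\*: [A-Z][A-Z ]* ([0-9a-f]*/[0-9a-f]*)' \
	/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt |
	sort | uniq -c | sort -rn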
00:27:03.253 [2024-07-16 00:03:03.880525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:47672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:47680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:47688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:47704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:47712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:47728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:47216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.253 [2024-07-16 00:03:03.880715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.253 [2024-07-16 00:03:03.880733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:47736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:47744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:47752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:47760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:47768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:47776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:47808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.880984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.880998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:47816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.881004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:03.253 [2024-07-16 00:03:03.881019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:47824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.253 [2024-07-16 00:03:03.881023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.881038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:47832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.881043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.881059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:47840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.881064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.881079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:47848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.881084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.881099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.881104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.881252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.881258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.881274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:47872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:03.254 [2024-07-16 00:03:03.881279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.881294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:47880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.881299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.881315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.881320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.881335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.881340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.881355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:47904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.881360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.881376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.881381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.881396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.881401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.881600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:47928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.881606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.881622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.881629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.881644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:47944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.881649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.881665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 
lba:47952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.881670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.881686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:47960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.881691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.881707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:47968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.881712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.881728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.881734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.881749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:47984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.881754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.881946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:47992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.881952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.881969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:48000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.881974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.881990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.881995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.882011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:48016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.882016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.882032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:48024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.882037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.882053] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.882060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.882077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:48040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.882082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.882098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:48048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.882103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.882240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.882247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.882264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.882269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.882285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:48072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.882290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.882307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:48080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.882312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.882329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:48088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.882334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.882350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:48096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.882355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.882372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:48104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.882377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:27:03.254 [2024-07-16 00:03:03.882393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:48112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.882399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.882504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:48120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.882510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.882527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.882532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.882551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:48136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.882557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.882573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.882578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.882596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.254 [2024-07-16 00:03:03.882601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:03.254 [2024-07-16 00:03:03.882618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:48160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.255 [2024-07-16 00:03:03.882623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:03.255 [2024-07-16 00:03:03.882640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.255 [2024-07-16 00:03:03.882646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:03.255 [2024-07-16 00:03:03.882663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:48176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.255 [2024-07-16 00:03:03.882668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:03.255 [2024-07-16 00:03:03.882704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.255 [2024-07-16 00:03:03.882710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:105 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:03.255 [2024-07-16 00:03:15.988161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.255 [2024-07-16 00:03:15.988200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:03.255 [2024-07-16 00:03:15.988238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.255 [2024-07-16 00:03:15.988244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:03.255 [2024-07-16 00:03:15.988255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.255 [2024-07-16 00:03:15.988260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:03.255 [2024-07-16 00:03:15.988271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.255 [2024-07-16 00:03:15.988280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:03.255 [2024-07-16 00:03:15.988291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.255 [2024-07-16 00:03:15.988296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:03.255 [2024-07-16 00:03:15.988862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.255 [2024-07-16 00:03:15.988874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:03.255 [2024-07-16 00:03:15.988886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.255 [2024-07-16 00:03:15.988891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:03.255 [2024-07-16 00:03:15.988901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.255 [2024-07-16 00:03:15.988906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:03.255 [2024-07-16 00:03:15.988916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.255 [2024-07-16 00:03:15.988921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:03.255 [2024-07-16 00:03:15.988931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.255 [2024-07-16 00:03:15.988936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:03.255 [2024-07-16 00:03:15.988947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.255 [2024-07-16 00:03:15.988952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:03.255 [2024-07-16 00:03:15.988962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.255 [2024-07-16 00:03:15.988967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:03.255 [2024-07-16 00:03:15.988977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.255 [2024-07-16 00:03:15.988982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:03.255 [2024-07-16 00:03:15.989090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.255 [2024-07-16 00:03:15.989098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:03.255 [2024-07-16 00:03:15.989109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.255 [2024-07-16 00:03:15.989114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:03.255 [2024-07-16 00:03:15.989125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.255 [2024-07-16 00:03:15.989130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:03.255 [2024-07-16 00:03:15.989140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.255 [2024-07-16 00:03:15.989145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:03.255 [2024-07-16 00:03:15.989156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.255 [2024-07-16 00:03:15.989163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:03.255 [2024-07-16 00:03:15.989173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:03.255 [2024-07-16 00:03:15.989178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:03.255 Received shutdown signal, test time was about 25.666396 seconds 00:27:03.255 00:27:03.255 Latency(us) 00:27:03.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:27:03.255 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:03.255 Verification LBA range: start 0x0 length 0x4000 00:27:03.255 Nvme0n1 : 25.67 11095.43 43.34 0.00 0.00 11519.09 266.24 3019898.88 00:27:03.255 =================================================================================================================== 00:27:03.255 Total : 11095.43 43.34 0.00 0.00 11519.09 266.24 3019898.88 00:27:03.255 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:03.255 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:27:03.255 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:03.255 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:27:03.255 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:03.255 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:27:03.255 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:03.255 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:27:03.255 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:03.255 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:03.255 rmmod nvme_tcp 00:27:03.255 rmmod nvme_fabrics 00:27:03.255 rmmod nvme_keyring 00:27:03.517 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:03.517 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:27:03.517 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:27:03.517 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 600253 ']' 00:27:03.517 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 600253 00:27:03.517 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@942 -- # '[' -z 600253 ']' 00:27:03.517 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # kill -0 600253 00:27:03.517 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # uname 00:27:03.517 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:27:03.517 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 600253 00:27:03.517 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:27:03.517 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:27:03.517 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # echo 'killing process with pid 600253' 00:27:03.517 killing process with pid 600253 00:27:03.517 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@961 -- # kill 600253 00:27:03.517 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # wait 600253 00:27:03.517 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' 
== iso ']' 00:27:03.517 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:03.517 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:03.517 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:03.517 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:03.517 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.517 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:03.517 00:03:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.063 00:03:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:06.063 00:27:06.063 real 0m40.237s 00:27:06.063 user 1m41.418s 00:27:06.063 sys 0m11.428s 00:27:06.063 00:03:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1118 -- # xtrace_disable 00:27:06.063 00:03:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:06.063 ************************************ 00:27:06.063 END TEST nvmf_host_multipath_status 00:27:06.063 ************************************ 00:27:06.063 00:03:20 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:27:06.063 00:03:20 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:06.063 00:03:20 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:27:06.063 00:03:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:27:06.063 00:03:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:06.063 ************************************ 00:27:06.063 START TEST nvmf_discovery_remove_ifc 00:27:06.063 ************************************ 00:27:06.063 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:06.063 * Looking for test storage... 
00:27:06.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:06.063 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:06.063 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:06.063 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.063 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.063 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.063 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.063 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.063 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.063 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.063 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.063 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.063 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.063 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:06.063 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:06.063 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:06.063 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.063 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:06.063 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:06.063 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:06.063 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.063 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.063 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:27:06.064 00:03:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:14.205 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:14.205 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:14.205 00:03:28 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:14.205 Found net devices under 0000:31:00.0: cvl_0_0 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:14.205 Found net devices under 0000:31:00.1: cvl_0_1 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:14.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:14.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.705 ms 00:27:14.205 00:27:14.205 --- 10.0.0.2 ping statistics --- 00:27:14.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.205 rtt min/avg/max/mdev = 0.705/0.705/0.705/0.000 ms 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:14.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:14.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.486 ms 00:27:14.205 00:27:14.205 --- 10.0.0.1 ping statistics --- 00:27:14.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.205 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:14.205 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=611242 00:27:14.206 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 611242 00:27:14.206 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:14.206 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@823 -- # '[' -z 611242 ']' 00:27:14.206 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.206 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # local max_retries=100 00:27:14.206 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:14.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:14.206 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # xtrace_disable 00:27:14.206 00:03:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:14.206 [2024-07-16 00:03:28.982463] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:27:14.206 [2024-07-16 00:03:28.982513] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:14.206 [2024-07-16 00:03:29.073638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.206 [2024-07-16 00:03:29.167270] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:14.206 [2024-07-16 00:03:29.167336] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:14.206 [2024-07-16 00:03:29.167344] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:14.206 [2024-07-16 00:03:29.167352] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:14.206 [2024-07-16 00:03:29.167357] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:14.206 [2024-07-16 00:03:29.167391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:14.778 00:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:27:14.778 00:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # return 0 00:27:14.778 00:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:14.778 00:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:14.778 00:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:14.778 00:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:14.778 00:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:14.778 00:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:14.778 00:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:14.778 [2024-07-16 00:03:29.830194] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:14.778 [2024-07-16 00:03:29.838474] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:14.778 null0 00:27:14.778 [2024-07-16 00:03:29.870374] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:14.778 00:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:14.778 00:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=611571 00:27:14.779 00:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 611571 /tmp/host.sock 00:27:14.779 00:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:14.779 00:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@823 -- # '[' -z 611571 ']' 00:27:14.779 00:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # local rpc_addr=/tmp/host.sock 00:27:14.779 00:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # local max_retries=100 00:27:14.779 00:03:29 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:14.779 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:14.779 00:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # xtrace_disable 00:27:14.779 00:03:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:14.779 [2024-07-16 00:03:29.952641] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:27:14.779 [2024-07-16 00:03:29.952707] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid611571 ] 00:27:15.039 [2024-07-16 00:03:30.026454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.039 [2024-07-16 00:03:30.106227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.610 00:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:27:15.610 00:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # return 0 00:27:15.610 00:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:15.610 00:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:15.610 00:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:15.610 00:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:15.610 00:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:15.610 00:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:15.610 00:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:15.610 00:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:15.871 00:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:15.871 00:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:15.871 00:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:15.871 00:03:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:16.813 [2024-07-16 00:03:31.819110] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:16.813 [2024-07-16 00:03:31.819132] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:16.813 [2024-07-16 00:03:31.819145] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:16.813 [2024-07-16 00:03:31.949564] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new 
subsystem nvme0 00:27:17.074 [2024-07-16 00:03:32.133455] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:17.074 [2024-07-16 00:03:32.133504] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:17.074 [2024-07-16 00:03:32.133527] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:17.074 [2024-07-16 00:03:32.133542] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:17.074 [2024-07-16 00:03:32.133562] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:17.074 00:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:17.074 00:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:17.074 00:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:17.074 00:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:17.074 00:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:17.074 00:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:17.074 00:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:17.074 00:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:17.074 00:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:17.074 00:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:17.074 [2024-07-16 00:03:32.178974] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c1e500 was disconnected and freed. delete nvme_qpair. 
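The wait_for_bdev/get_bdev_list trace above polls the host app over /tmp/host.sock once per second until bdev_get_bdevs reports the expected bdev name. A minimal sketch of that polling pattern, assuming scripts/rpc.py as the RPC client and an illustrative retry bound (the traced helper in discovery_remove_ifc.sh may differ):

    # Sketch only: poll the SPDK host app on /tmp/host.sock until its bdev
    # list matches the expected value ("nvme0n1" here, "" once it is removed).
    get_bdev_list() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        local expected="$1" retries=20           # retry bound is an assumption
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            (( retries-- )) || return 1
            sleep 1                              # same 1 s interval as the trace
        done
    }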
00:27:17.074 00:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:17.074 00:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:17.074 00:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:17.335 00:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:17.335 00:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:17.335 00:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:17.335 00:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:17.335 00:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:17.335 00:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:17.335 00:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:17.335 00:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:17.335 00:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:17.335 00:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:17.335 00:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:18.288 00:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:18.288 00:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:18.288 00:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:18.288 00:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:18.288 00:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:18.288 00:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:18.288 00:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:18.288 00:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:18.288 00:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:18.288 00:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:19.679 00:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:19.679 00:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:19.679 00:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:19.679 00:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:19.679 00:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:19.679 00:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:19.679 00:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:27:19.679 00:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:19.679 00:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:19.679 00:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:20.621 00:03:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:20.621 00:03:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:20.621 00:03:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:20.621 00:03:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:20.621 00:03:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:20.621 00:03:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:20.621 00:03:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:20.621 00:03:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:20.621 00:03:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:20.621 00:03:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:21.566 00:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:21.566 00:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:21.566 00:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:21.566 00:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:21.566 00:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:21.566 00:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:21.566 00:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:21.566 00:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:21.566 00:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:21.566 00:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:22.509 [2024-07-16 00:03:37.573857] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:22.509 [2024-07-16 00:03:37.573897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.509 [2024-07-16 00:03:37.573909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.509 [2024-07-16 00:03:37.573919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.509 [2024-07-16 00:03:37.573926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.509 [2024-07-16 00:03:37.573934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.509 [2024-07-16 00:03:37.573941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.509 [2024-07-16 00:03:37.573949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.509 [2024-07-16 00:03:37.573956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.509 [2024-07-16 00:03:37.573964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.509 [2024-07-16 00:03:37.573971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.509 [2024-07-16 00:03:37.573978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be50a0 is same with the state(5) to be set 00:27:22.509 [2024-07-16 00:03:37.583879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be50a0 (9): Bad file descriptor 00:27:22.509 [2024-07-16 00:03:37.593918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:22.509 00:03:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:22.509 00:03:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:22.509 00:03:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:22.509 00:03:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:22.509 00:03:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:22.509 00:03:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.509 00:03:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:23.452 [2024-07-16 00:03:38.641314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:23.452 [2024-07-16 00:03:38.641358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be50a0 with addr=10.0.0.2, port=4420 00:27:23.452 [2024-07-16 00:03:38.641372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be50a0 is same with the state(5) to be set 00:27:23.452 [2024-07-16 00:03:38.641399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be50a0 (9): Bad file descriptor 00:27:23.452 [2024-07-16 00:03:38.641773] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:23.452 [2024-07-16 00:03:38.641796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:23.452 [2024-07-16 00:03:38.641803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:23.452 [2024-07-16 00:03:38.641812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:23.452 [2024-07-16 00:03:38.641828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
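The errno-110 (connection timed out) and "Resetting controller failed" messages here are the expected fallout of the interface removal at the top of this excerpt: the test drops the target-side address and link inside the target's network namespace, so the initiator's TCP path to 10.0.0.2:4420 dies and bdev nvme0n1 is eventually torn down. Pulled out of the xtrace noise, that teardown step is just the following (namespace and interface names copied from the log; the standalone form is an illustrative sketch):

# Sketch only: remove the target-side address and take the link down so the
# host's connection to 10.0.0.2:4420 times out (errno 110 in the errors above).
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
wait_for_bdev ''   # poll (as sketched earlier) until the bdev list is empty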
00:27:23.452 [2024-07-16 00:03:38.641836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:23.712 00:03:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:23.713 00:03:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:23.713 00:03:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:24.656 [2024-07-16 00:03:39.644210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:24.656 [2024-07-16 00:03:39.644234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:24.656 [2024-07-16 00:03:39.644241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:24.656 [2024-07-16 00:03:39.644248] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:27:24.656 [2024-07-16 00:03:39.644260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:24.656 [2024-07-16 00:03:39.644279] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:24.656 [2024-07-16 00:03:39.644300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.656 [2024-07-16 00:03:39.644310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.656 [2024-07-16 00:03:39.644321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.656 [2024-07-16 00:03:39.644328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.656 [2024-07-16 00:03:39.644336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.656 [2024-07-16 00:03:39.644342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.656 [2024-07-16 00:03:39.644350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.656 [2024-07-16 00:03:39.644357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.656 [2024-07-16 00:03:39.644365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:24.656 [2024-07-16 00:03:39.644372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:24.656 [2024-07-16 00:03:39.644379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
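The counterpart restore step appears a few lines below, interleaved with the reconnect errors; isolated from the trace, it re-adds the address, brings the link back up, and then waits for the rediscovered namespace to appear as a new bdev (again a sketch reusing names from the log, not the literal script):

# Sketch only: restore the target-side path and wait for the discovery service
# to re-create the namespace as bdev nvme1n1.
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
wait_for_bdev nvme1n1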
00:27:24.656 [2024-07-16 00:03:39.644862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be4520 (9): Bad file descriptor 00:27:24.656 [2024-07-16 00:03:39.645875] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:24.656 [2024-07-16 00:03:39.645887] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:24.656 00:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:24.656 00:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:24.656 00:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:24.656 00:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:24.656 00:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:24.656 00:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:24.656 00:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:24.656 00:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:24.656 00:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:24.656 00:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:24.656 00:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:24.656 00:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:24.656 00:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:24.656 00:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:24.656 00:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:24.656 00:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:24.656 00:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:24.656 00:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:24.656 00:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:24.656 00:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:24.917 00:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:24.917 00:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:25.856 00:03:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:25.856 00:03:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.856 00:03:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:25.856 00:03:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:25.856 00:03:40 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:27:25.856 00:03:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:25.856 00:03:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:25.856 00:03:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:25.856 00:03:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:25.856 00:03:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:26.795 [2024-07-16 00:03:41.705474] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:26.795 [2024-07-16 00:03:41.705495] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:26.795 [2024-07-16 00:03:41.705508] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:26.795 [2024-07-16 00:03:41.833891] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:26.795 00:03:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:26.795 00:03:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.795 00:03:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:26.795 00:03:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:26.795 00:03:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:26.796 00:03:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:26.796 00:03:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:26.796 00:03:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:27.057 00:03:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:27.057 00:03:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:27.057 [2024-07-16 00:03:42.017124] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:27.057 [2024-07-16 00:03:42.017166] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:27.057 [2024-07-16 00:03:42.017187] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:27.057 [2024-07-16 00:03:42.017201] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:27.057 [2024-07-16 00:03:42.017208] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:27.057 [2024-07-16 00:03:42.022439] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c28040 was disconnected and freed. delete nvme_qpair. 
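The Discovery[10.0.0.2:8009] messages above come from the host app's bdev_nvme discovery service noticing the path is back, fetching the discovery log page, and re-attaching the NVM subsystem as controller nvme1 (hence bdev nvme1n1). How that discovery service was originally started is not shown in this excerpt; an assumed, typical invocation against the same discovery endpoint would look like the following (flags are common bdev_nvme_start_discovery options and may not match what the script used):

# Assumption: example start of the discovery service that produces the
# Discovery[10.0.0.2:8009] log lines; not copied from this test run.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
    bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4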
00:27:28.000 00:03:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:28.000 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:28.000 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.000 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:28.000 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:28.000 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.000 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:28.000 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:28.000 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:28.000 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:28.000 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 611571 00:27:28.000 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@942 -- # '[' -z 611571 ']' 00:27:28.000 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # kill -0 611571 00:27:28.000 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # uname 00:27:28.000 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:27:28.000 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 611571 00:27:28.000 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:27:28.000 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:27:28.000 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 611571' 00:27:28.000 killing process with pid 611571 00:27:28.000 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@961 -- # kill 611571 00:27:28.000 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # wait 611571 00:27:28.263 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:28.263 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:28.263 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:27:28.263 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:28.263 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:27:28.263 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:28.263 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:28.263 rmmod nvme_tcp 00:27:28.263 rmmod nvme_fabrics 00:27:28.263 rmmod nvme_keyring 00:27:28.263 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:28.263 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:27:28.263 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:27:28.263 
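Both killprocess calls in this teardown (the host app above, the target app below) follow the same guarded pattern visible in the trace: validate the pid argument, probe liveness with kill -0, look up the process name, then kill and wait. A simplified reconstruction (the real helper in autotest_common.sh also special-cases sudo-wrapped processes, which is omitted here):

# Simplified sketch of the killprocess pattern traced in this log.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                # reject an empty pid argument
    kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if it already exited
    echo "killing process with pid $pid"     # matches the message in the log
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap it when it is our child
}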
00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 611242 ']' 00:27:28.263 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 611242 00:27:28.263 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@942 -- # '[' -z 611242 ']' 00:27:28.263 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # kill -0 611242 00:27:28.263 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # uname 00:27:28.263 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:27:28.263 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 611242 00:27:28.263 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:27:28.263 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:27:28.263 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 611242' 00:27:28.263 killing process with pid 611242 00:27:28.263 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@961 -- # kill 611242 00:27:28.263 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # wait 611242 00:27:28.524 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:28.524 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:28.524 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:28.524 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:28.524 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:28.524 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.524 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:28.524 00:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.433 00:03:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:30.433 00:27:30.433 real 0m24.717s 00:27:30.433 user 0m29.461s 00:27:30.433 sys 0m7.271s 00:27:30.433 00:03:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:27:30.433 00:03:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:30.433 ************************************ 00:27:30.433 END TEST nvmf_discovery_remove_ifc 00:27:30.433 ************************************ 00:27:30.433 00:03:45 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:27:30.433 00:03:45 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:30.433 00:03:45 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:27:30.433 00:03:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:27:30.433 00:03:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:30.433 ************************************ 00:27:30.433 START TEST nvmf_identify_kernel_target 00:27:30.433 ************************************ 00:27:30.433 00:03:45 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:30.693 * Looking for test storage... 00:27:30.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:30.693 00:03:45 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:30.693 00:03:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:38.913 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:38.913 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:38.913 Found net devices under 0000:31:00.0: cvl_0_0 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:38.913 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:38.914 Found net devices under 0000:31:00.1: cvl_0_1 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:38.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:38.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 00:27:38.914 00:27:38.914 --- 10.0.0.2 ping statistics --- 00:27:38.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.914 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:38.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:38.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:27:38.914 00:27:38.914 --- 10.0.0.1 ping statistics --- 00:27:38.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.914 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:38.914 00:03:53 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:38.914 00:03:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:38.914 00:03:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:38.914 00:03:54 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:43.120 Waiting for block devices as requested 00:27:43.120 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:43.120 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:43.120 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:43.120 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:43.120 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:43.120 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:43.380 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:43.380 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:43.380 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:43.642 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:43.642 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:43.642 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:43.642 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:43.904 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:43.904 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:43.904 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:43.904 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:44.166 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:44.166 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:44.166 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:44.166 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:27:44.166 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:44.166 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:27:44.166 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:44.166 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:44.166 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:44.166 No valid GPT data, bailing 00:27:44.166 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:44.166 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:44.166 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:44.166 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:44.166 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:44.166 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:44.167 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:44.167 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:44.167 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:44.167 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:44.167 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:44.167 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:44.167 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:44.167 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:44.167 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:44.167 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:44.167 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:44.167 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:27:44.167 00:27:44.167 Discovery Log Number of Records 2, Generation counter 2 00:27:44.167 =====Discovery Log Entry 0====== 00:27:44.167 trtype: tcp 00:27:44.167 adrfam: ipv4 00:27:44.167 subtype: current discovery subsystem 00:27:44.167 treq: not specified, sq flow control disable supported 00:27:44.167 portid: 1 00:27:44.167 trsvcid: 4420 00:27:44.167 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:44.167 traddr: 10.0.0.1 00:27:44.167 eflags: none 00:27:44.167 sectype: none 00:27:44.167 =====Discovery Log Entry 1====== 00:27:44.167 trtype: tcp 00:27:44.167 adrfam: ipv4 00:27:44.167 subtype: nvme subsystem 00:27:44.167 treq: not specified, sq flow control disable supported 00:27:44.167 portid: 1 00:27:44.167 trsvcid: 4420 00:27:44.167 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:44.167 traddr: 10.0.0.1 00:27:44.167 eflags: none 00:27:44.167 sectype: none 00:27:44.167 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:44.167 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:44.167 ===================================================== 00:27:44.167 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:44.167 ===================================================== 00:27:44.167 Controller Capabilities/Features 00:27:44.167 ================================ 00:27:44.167 Vendor ID: 0000 00:27:44.167 Subsystem Vendor ID: 0000 00:27:44.167 Serial Number: 7f6312eec65356c4e538 00:27:44.167 Model Number: Linux 00:27:44.167 Firmware Version: 6.7.0-68 00:27:44.167 Recommended Arb Burst: 0 00:27:44.167 IEEE OUI Identifier: 00 00 00 00:27:44.167 Multi-path I/O 00:27:44.167 May have multiple subsystem ports: No 00:27:44.167 May have multiple controllers: No 00:27:44.167 Associated with SR-IOV VF: No 00:27:44.167 
Max Data Transfer Size: Unlimited 00:27:44.167 Max Number of Namespaces: 0 00:27:44.167 Max Number of I/O Queues: 1024 00:27:44.167 NVMe Specification Version (VS): 1.3 00:27:44.167 NVMe Specification Version (Identify): 1.3 00:27:44.167 Maximum Queue Entries: 1024 00:27:44.167 Contiguous Queues Required: No 00:27:44.167 Arbitration Mechanisms Supported 00:27:44.167 Weighted Round Robin: Not Supported 00:27:44.167 Vendor Specific: Not Supported 00:27:44.167 Reset Timeout: 7500 ms 00:27:44.167 Doorbell Stride: 4 bytes 00:27:44.167 NVM Subsystem Reset: Not Supported 00:27:44.167 Command Sets Supported 00:27:44.167 NVM Command Set: Supported 00:27:44.167 Boot Partition: Not Supported 00:27:44.167 Memory Page Size Minimum: 4096 bytes 00:27:44.167 Memory Page Size Maximum: 4096 bytes 00:27:44.167 Persistent Memory Region: Not Supported 00:27:44.167 Optional Asynchronous Events Supported 00:27:44.167 Namespace Attribute Notices: Not Supported 00:27:44.167 Firmware Activation Notices: Not Supported 00:27:44.167 ANA Change Notices: Not Supported 00:27:44.167 PLE Aggregate Log Change Notices: Not Supported 00:27:44.167 LBA Status Info Alert Notices: Not Supported 00:27:44.167 EGE Aggregate Log Change Notices: Not Supported 00:27:44.167 Normal NVM Subsystem Shutdown event: Not Supported 00:27:44.167 Zone Descriptor Change Notices: Not Supported 00:27:44.167 Discovery Log Change Notices: Supported 00:27:44.167 Controller Attributes 00:27:44.167 128-bit Host Identifier: Not Supported 00:27:44.167 Non-Operational Permissive Mode: Not Supported 00:27:44.167 NVM Sets: Not Supported 00:27:44.167 Read Recovery Levels: Not Supported 00:27:44.167 Endurance Groups: Not Supported 00:27:44.167 Predictable Latency Mode: Not Supported 00:27:44.167 Traffic Based Keep ALive: Not Supported 00:27:44.167 Namespace Granularity: Not Supported 00:27:44.167 SQ Associations: Not Supported 00:27:44.167 UUID List: Not Supported 00:27:44.167 Multi-Domain Subsystem: Not Supported 00:27:44.167 Fixed Capacity Management: Not Supported 00:27:44.167 Variable Capacity Management: Not Supported 00:27:44.167 Delete Endurance Group: Not Supported 00:27:44.167 Delete NVM Set: Not Supported 00:27:44.167 Extended LBA Formats Supported: Not Supported 00:27:44.167 Flexible Data Placement Supported: Not Supported 00:27:44.167 00:27:44.167 Controller Memory Buffer Support 00:27:44.167 ================================ 00:27:44.167 Supported: No 00:27:44.167 00:27:44.167 Persistent Memory Region Support 00:27:44.167 ================================ 00:27:44.167 Supported: No 00:27:44.167 00:27:44.167 Admin Command Set Attributes 00:27:44.167 ============================ 00:27:44.167 Security Send/Receive: Not Supported 00:27:44.167 Format NVM: Not Supported 00:27:44.167 Firmware Activate/Download: Not Supported 00:27:44.167 Namespace Management: Not Supported 00:27:44.167 Device Self-Test: Not Supported 00:27:44.167 Directives: Not Supported 00:27:44.167 NVMe-MI: Not Supported 00:27:44.167 Virtualization Management: Not Supported 00:27:44.167 Doorbell Buffer Config: Not Supported 00:27:44.167 Get LBA Status Capability: Not Supported 00:27:44.167 Command & Feature Lockdown Capability: Not Supported 00:27:44.167 Abort Command Limit: 1 00:27:44.167 Async Event Request Limit: 1 00:27:44.167 Number of Firmware Slots: N/A 00:27:44.167 Firmware Slot 1 Read-Only: N/A 00:27:44.167 Firmware Activation Without Reset: N/A 00:27:44.167 Multiple Update Detection Support: N/A 00:27:44.167 Firmware Update Granularity: No Information Provided 00:27:44.167 
Per-Namespace SMART Log: No 00:27:44.167 Asymmetric Namespace Access Log Page: Not Supported 00:27:44.167 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:44.167 Command Effects Log Page: Not Supported 00:27:44.167 Get Log Page Extended Data: Supported 00:27:44.167 Telemetry Log Pages: Not Supported 00:27:44.167 Persistent Event Log Pages: Not Supported 00:27:44.167 Supported Log Pages Log Page: May Support 00:27:44.167 Commands Supported & Effects Log Page: Not Supported 00:27:44.167 Feature Identifiers & Effects Log Page:May Support 00:27:44.167 NVMe-MI Commands & Effects Log Page: May Support 00:27:44.167 Data Area 4 for Telemetry Log: Not Supported 00:27:44.167 Error Log Page Entries Supported: 1 00:27:44.167 Keep Alive: Not Supported 00:27:44.167 00:27:44.167 NVM Command Set Attributes 00:27:44.167 ========================== 00:27:44.167 Submission Queue Entry Size 00:27:44.167 Max: 1 00:27:44.167 Min: 1 00:27:44.167 Completion Queue Entry Size 00:27:44.167 Max: 1 00:27:44.167 Min: 1 00:27:44.167 Number of Namespaces: 0 00:27:44.167 Compare Command: Not Supported 00:27:44.167 Write Uncorrectable Command: Not Supported 00:27:44.167 Dataset Management Command: Not Supported 00:27:44.167 Write Zeroes Command: Not Supported 00:27:44.167 Set Features Save Field: Not Supported 00:27:44.167 Reservations: Not Supported 00:27:44.167 Timestamp: Not Supported 00:27:44.167 Copy: Not Supported 00:27:44.167 Volatile Write Cache: Not Present 00:27:44.167 Atomic Write Unit (Normal): 1 00:27:44.167 Atomic Write Unit (PFail): 1 00:27:44.167 Atomic Compare & Write Unit: 1 00:27:44.167 Fused Compare & Write: Not Supported 00:27:44.167 Scatter-Gather List 00:27:44.167 SGL Command Set: Supported 00:27:44.167 SGL Keyed: Not Supported 00:27:44.167 SGL Bit Bucket Descriptor: Not Supported 00:27:44.167 SGL Metadata Pointer: Not Supported 00:27:44.167 Oversized SGL: Not Supported 00:27:44.167 SGL Metadata Address: Not Supported 00:27:44.167 SGL Offset: Supported 00:27:44.167 Transport SGL Data Block: Not Supported 00:27:44.167 Replay Protected Memory Block: Not Supported 00:27:44.167 00:27:44.167 Firmware Slot Information 00:27:44.167 ========================= 00:27:44.167 Active slot: 0 00:27:44.167 00:27:44.167 00:27:44.167 Error Log 00:27:44.167 ========= 00:27:44.167 00:27:44.167 Active Namespaces 00:27:44.167 ================= 00:27:44.167 Discovery Log Page 00:27:44.167 ================== 00:27:44.167 Generation Counter: 2 00:27:44.167 Number of Records: 2 00:27:44.167 Record Format: 0 00:27:44.167 00:27:44.167 Discovery Log Entry 0 00:27:44.167 ---------------------- 00:27:44.167 Transport Type: 3 (TCP) 00:27:44.167 Address Family: 1 (IPv4) 00:27:44.168 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:44.168 Entry Flags: 00:27:44.168 Duplicate Returned Information: 0 00:27:44.168 Explicit Persistent Connection Support for Discovery: 0 00:27:44.168 Transport Requirements: 00:27:44.168 Secure Channel: Not Specified 00:27:44.168 Port ID: 1 (0x0001) 00:27:44.168 Controller ID: 65535 (0xffff) 00:27:44.168 Admin Max SQ Size: 32 00:27:44.168 Transport Service Identifier: 4420 00:27:44.168 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:44.168 Transport Address: 10.0.0.1 00:27:44.168 Discovery Log Entry 1 00:27:44.168 ---------------------- 00:27:44.168 Transport Type: 3 (TCP) 00:27:44.168 Address Family: 1 (IPv4) 00:27:44.168 Subsystem Type: 2 (NVM Subsystem) 00:27:44.168 Entry Flags: 00:27:44.168 Duplicate Returned Information: 0 00:27:44.168 Explicit Persistent 
Connection Support for Discovery: 0 00:27:44.168 Transport Requirements: 00:27:44.168 Secure Channel: Not Specified 00:27:44.168 Port ID: 1 (0x0001) 00:27:44.168 Controller ID: 65535 (0xffff) 00:27:44.168 Admin Max SQ Size: 32 00:27:44.168 Transport Service Identifier: 4420 00:27:44.168 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:44.168 Transport Address: 10.0.0.1 00:27:44.168 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:44.168 get_feature(0x01) failed 00:27:44.168 get_feature(0x02) failed 00:27:44.168 get_feature(0x04) failed 00:27:44.168 ===================================================== 00:27:44.168 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:44.168 ===================================================== 00:27:44.168 Controller Capabilities/Features 00:27:44.168 ================================ 00:27:44.168 Vendor ID: 0000 00:27:44.168 Subsystem Vendor ID: 0000 00:27:44.168 Serial Number: 63e591d277591bf15103 00:27:44.168 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:44.168 Firmware Version: 6.7.0-68 00:27:44.168 Recommended Arb Burst: 6 00:27:44.168 IEEE OUI Identifier: 00 00 00 00:27:44.168 Multi-path I/O 00:27:44.168 May have multiple subsystem ports: Yes 00:27:44.168 May have multiple controllers: Yes 00:27:44.168 Associated with SR-IOV VF: No 00:27:44.168 Max Data Transfer Size: Unlimited 00:27:44.168 Max Number of Namespaces: 1024 00:27:44.168 Max Number of I/O Queues: 128 00:27:44.168 NVMe Specification Version (VS): 1.3 00:27:44.168 NVMe Specification Version (Identify): 1.3 00:27:44.168 Maximum Queue Entries: 1024 00:27:44.168 Contiguous Queues Required: No 00:27:44.168 Arbitration Mechanisms Supported 00:27:44.168 Weighted Round Robin: Not Supported 00:27:44.168 Vendor Specific: Not Supported 00:27:44.168 Reset Timeout: 7500 ms 00:27:44.168 Doorbell Stride: 4 bytes 00:27:44.168 NVM Subsystem Reset: Not Supported 00:27:44.168 Command Sets Supported 00:27:44.168 NVM Command Set: Supported 00:27:44.168 Boot Partition: Not Supported 00:27:44.168 Memory Page Size Minimum: 4096 bytes 00:27:44.168 Memory Page Size Maximum: 4096 bytes 00:27:44.168 Persistent Memory Region: Not Supported 00:27:44.168 Optional Asynchronous Events Supported 00:27:44.168 Namespace Attribute Notices: Supported 00:27:44.168 Firmware Activation Notices: Not Supported 00:27:44.168 ANA Change Notices: Supported 00:27:44.168 PLE Aggregate Log Change Notices: Not Supported 00:27:44.168 LBA Status Info Alert Notices: Not Supported 00:27:44.168 EGE Aggregate Log Change Notices: Not Supported 00:27:44.168 Normal NVM Subsystem Shutdown event: Not Supported 00:27:44.168 Zone Descriptor Change Notices: Not Supported 00:27:44.168 Discovery Log Change Notices: Not Supported 00:27:44.168 Controller Attributes 00:27:44.168 128-bit Host Identifier: Supported 00:27:44.168 Non-Operational Permissive Mode: Not Supported 00:27:44.168 NVM Sets: Not Supported 00:27:44.168 Read Recovery Levels: Not Supported 00:27:44.168 Endurance Groups: Not Supported 00:27:44.168 Predictable Latency Mode: Not Supported 00:27:44.168 Traffic Based Keep ALive: Supported 00:27:44.168 Namespace Granularity: Not Supported 00:27:44.168 SQ Associations: Not Supported 00:27:44.168 UUID List: Not Supported 00:27:44.168 Multi-Domain Subsystem: Not Supported 00:27:44.168 Fixed 
Capacity Management: Not Supported 00:27:44.168 Variable Capacity Management: Not Supported 00:27:44.168 Delete Endurance Group: Not Supported 00:27:44.168 Delete NVM Set: Not Supported 00:27:44.168 Extended LBA Formats Supported: Not Supported 00:27:44.168 Flexible Data Placement Supported: Not Supported 00:27:44.168 00:27:44.168 Controller Memory Buffer Support 00:27:44.168 ================================ 00:27:44.168 Supported: No 00:27:44.168 00:27:44.168 Persistent Memory Region Support 00:27:44.168 ================================ 00:27:44.168 Supported: No 00:27:44.168 00:27:44.168 Admin Command Set Attributes 00:27:44.168 ============================ 00:27:44.168 Security Send/Receive: Not Supported 00:27:44.168 Format NVM: Not Supported 00:27:44.168 Firmware Activate/Download: Not Supported 00:27:44.168 Namespace Management: Not Supported 00:27:44.168 Device Self-Test: Not Supported 00:27:44.168 Directives: Not Supported 00:27:44.168 NVMe-MI: Not Supported 00:27:44.168 Virtualization Management: Not Supported 00:27:44.168 Doorbell Buffer Config: Not Supported 00:27:44.168 Get LBA Status Capability: Not Supported 00:27:44.168 Command & Feature Lockdown Capability: Not Supported 00:27:44.168 Abort Command Limit: 4 00:27:44.168 Async Event Request Limit: 4 00:27:44.168 Number of Firmware Slots: N/A 00:27:44.168 Firmware Slot 1 Read-Only: N/A 00:27:44.168 Firmware Activation Without Reset: N/A 00:27:44.168 Multiple Update Detection Support: N/A 00:27:44.168 Firmware Update Granularity: No Information Provided 00:27:44.168 Per-Namespace SMART Log: Yes 00:27:44.168 Asymmetric Namespace Access Log Page: Supported 00:27:44.168 ANA Transition Time : 10 sec 00:27:44.168 00:27:44.168 Asymmetric Namespace Access Capabilities 00:27:44.168 ANA Optimized State : Supported 00:27:44.168 ANA Non-Optimized State : Supported 00:27:44.168 ANA Inaccessible State : Supported 00:27:44.168 ANA Persistent Loss State : Supported 00:27:44.168 ANA Change State : Supported 00:27:44.168 ANAGRPID is not changed : No 00:27:44.168 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:44.168 00:27:44.168 ANA Group Identifier Maximum : 128 00:27:44.168 Number of ANA Group Identifiers : 128 00:27:44.168 Max Number of Allowed Namespaces : 1024 00:27:44.168 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:44.168 Command Effects Log Page: Supported 00:27:44.168 Get Log Page Extended Data: Supported 00:27:44.168 Telemetry Log Pages: Not Supported 00:27:44.168 Persistent Event Log Pages: Not Supported 00:27:44.168 Supported Log Pages Log Page: May Support 00:27:44.168 Commands Supported & Effects Log Page: Not Supported 00:27:44.168 Feature Identifiers & Effects Log Page:May Support 00:27:44.168 NVMe-MI Commands & Effects Log Page: May Support 00:27:44.168 Data Area 4 for Telemetry Log: Not Supported 00:27:44.168 Error Log Page Entries Supported: 128 00:27:44.168 Keep Alive: Supported 00:27:44.168 Keep Alive Granularity: 1000 ms 00:27:44.168 00:27:44.168 NVM Command Set Attributes 00:27:44.168 ========================== 00:27:44.168 Submission Queue Entry Size 00:27:44.168 Max: 64 00:27:44.168 Min: 64 00:27:44.168 Completion Queue Entry Size 00:27:44.168 Max: 16 00:27:44.168 Min: 16 00:27:44.168 Number of Namespaces: 1024 00:27:44.168 Compare Command: Not Supported 00:27:44.168 Write Uncorrectable Command: Not Supported 00:27:44.168 Dataset Management Command: Supported 00:27:44.168 Write Zeroes Command: Supported 00:27:44.168 Set Features Save Field: Not Supported 00:27:44.168 Reservations: Not Supported 00:27:44.168 
Timestamp: Not Supported 00:27:44.168 Copy: Not Supported 00:27:44.168 Volatile Write Cache: Present 00:27:44.168 Atomic Write Unit (Normal): 1 00:27:44.168 Atomic Write Unit (PFail): 1 00:27:44.168 Atomic Compare & Write Unit: 1 00:27:44.168 Fused Compare & Write: Not Supported 00:27:44.168 Scatter-Gather List 00:27:44.168 SGL Command Set: Supported 00:27:44.168 SGL Keyed: Not Supported 00:27:44.168 SGL Bit Bucket Descriptor: Not Supported 00:27:44.168 SGL Metadata Pointer: Not Supported 00:27:44.168 Oversized SGL: Not Supported 00:27:44.168 SGL Metadata Address: Not Supported 00:27:44.168 SGL Offset: Supported 00:27:44.168 Transport SGL Data Block: Not Supported 00:27:44.168 Replay Protected Memory Block: Not Supported 00:27:44.168 00:27:44.168 Firmware Slot Information 00:27:44.168 ========================= 00:27:44.168 Active slot: 0 00:27:44.168 00:27:44.168 Asymmetric Namespace Access 00:27:44.168 =========================== 00:27:44.168 Change Count : 0 00:27:44.168 Number of ANA Group Descriptors : 1 00:27:44.168 ANA Group Descriptor : 0 00:27:44.168 ANA Group ID : 1 00:27:44.168 Number of NSID Values : 1 00:27:44.168 Change Count : 0 00:27:44.169 ANA State : 1 00:27:44.169 Namespace Identifier : 1 00:27:44.169 00:27:44.169 Commands Supported and Effects 00:27:44.169 ============================== 00:27:44.169 Admin Commands 00:27:44.169 -------------- 00:27:44.169 Get Log Page (02h): Supported 00:27:44.169 Identify (06h): Supported 00:27:44.169 Abort (08h): Supported 00:27:44.169 Set Features (09h): Supported 00:27:44.169 Get Features (0Ah): Supported 00:27:44.169 Asynchronous Event Request (0Ch): Supported 00:27:44.169 Keep Alive (18h): Supported 00:27:44.169 I/O Commands 00:27:44.169 ------------ 00:27:44.169 Flush (00h): Supported 00:27:44.169 Write (01h): Supported LBA-Change 00:27:44.169 Read (02h): Supported 00:27:44.169 Write Zeroes (08h): Supported LBA-Change 00:27:44.169 Dataset Management (09h): Supported 00:27:44.169 00:27:44.169 Error Log 00:27:44.169 ========= 00:27:44.169 Entry: 0 00:27:44.169 Error Count: 0x3 00:27:44.169 Submission Queue Id: 0x0 00:27:44.169 Command Id: 0x5 00:27:44.169 Phase Bit: 0 00:27:44.169 Status Code: 0x2 00:27:44.169 Status Code Type: 0x0 00:27:44.169 Do Not Retry: 1 00:27:44.430 Error Location: 0x28 00:27:44.430 LBA: 0x0 00:27:44.430 Namespace: 0x0 00:27:44.430 Vendor Log Page: 0x0 00:27:44.430 ----------- 00:27:44.430 Entry: 1 00:27:44.430 Error Count: 0x2 00:27:44.430 Submission Queue Id: 0x0 00:27:44.430 Command Id: 0x5 00:27:44.430 Phase Bit: 0 00:27:44.430 Status Code: 0x2 00:27:44.430 Status Code Type: 0x0 00:27:44.430 Do Not Retry: 1 00:27:44.430 Error Location: 0x28 00:27:44.430 LBA: 0x0 00:27:44.430 Namespace: 0x0 00:27:44.430 Vendor Log Page: 0x0 00:27:44.430 ----------- 00:27:44.430 Entry: 2 00:27:44.430 Error Count: 0x1 00:27:44.430 Submission Queue Id: 0x0 00:27:44.430 Command Id: 0x4 00:27:44.430 Phase Bit: 0 00:27:44.430 Status Code: 0x2 00:27:44.430 Status Code Type: 0x0 00:27:44.430 Do Not Retry: 1 00:27:44.430 Error Location: 0x28 00:27:44.430 LBA: 0x0 00:27:44.430 Namespace: 0x0 00:27:44.430 Vendor Log Page: 0x0 00:27:44.430 00:27:44.430 Number of Queues 00:27:44.430 ================ 00:27:44.430 Number of I/O Submission Queues: 128 00:27:44.430 Number of I/O Completion Queues: 128 00:27:44.430 00:27:44.430 ZNS Specific Controller Data 00:27:44.430 ============================ 00:27:44.430 Zone Append Size Limit: 0 00:27:44.430 00:27:44.430 00:27:44.430 Active Namespaces 00:27:44.430 ================= 00:27:44.430 
get_feature(0x05) failed 00:27:44.430 Namespace ID:1 00:27:44.430 Command Set Identifier: NVM (00h) 00:27:44.430 Deallocate: Supported 00:27:44.430 Deallocated/Unwritten Error: Not Supported 00:27:44.430 Deallocated Read Value: Unknown 00:27:44.430 Deallocate in Write Zeroes: Not Supported 00:27:44.430 Deallocated Guard Field: 0xFFFF 00:27:44.430 Flush: Supported 00:27:44.430 Reservation: Not Supported 00:27:44.430 Namespace Sharing Capabilities: Multiple Controllers 00:27:44.430 Size (in LBAs): 3750748848 (1788GiB) 00:27:44.430 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:44.430 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:44.430 UUID: 0f11ed1a-e3b3-4664-acf9-29a45c47635c 00:27:44.430 Thin Provisioning: Not Supported 00:27:44.430 Per-NS Atomic Units: Yes 00:27:44.430 Atomic Write Unit (Normal): 8 00:27:44.431 Atomic Write Unit (PFail): 8 00:27:44.431 Preferred Write Granularity: 8 00:27:44.431 Atomic Compare & Write Unit: 8 00:27:44.431 Atomic Boundary Size (Normal): 0 00:27:44.431 Atomic Boundary Size (PFail): 0 00:27:44.431 Atomic Boundary Offset: 0 00:27:44.431 NGUID/EUI64 Never Reused: No 00:27:44.431 ANA group ID: 1 00:27:44.431 Namespace Write Protected: No 00:27:44.431 Number of LBA Formats: 1 00:27:44.431 Current LBA Format: LBA Format #00 00:27:44.431 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:44.431 00:27:44.431 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:44.431 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:44.431 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:44.431 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:44.431 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:44.431 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:44.431 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:44.431 rmmod nvme_tcp 00:27:44.431 rmmod nvme_fabrics 00:27:44.431 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:44.431 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:44.431 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:44.431 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:44.431 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:44.431 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:44.431 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:44.431 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:44.431 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:44.431 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.431 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:44.431 00:03:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.342 00:04:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
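The configfs writes traced at the start of this test (the mkdir/echo/ln -s sequence under /sys/kernel/config/nvmet) are what stood up the kernel NVMe/TCP target that the discover and identify output above was collected from. A minimal standalone sketch of that sequence follows; the trace shows the values being written but not the destination files, so the attribute file names below are the standard kernel nvmet configfs layout rather than something taken from this log, and it assumes the nvmet/nvmet-tcp modules are available and /dev/nvme0n1 is the backing device:

modprobe nvmet nvmet_tcp
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir -p "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"     # model string seen in the identify output above
echo 1 > "$subsys/attr_allow_any_host"                           # no host whitelist for the test
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"           # backing block device
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"                              # listen address, transport, port, address family
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                              # expose the subsystem on the port
nvme discover -t tcp -a 10.0.0.1 -s 4420                         # should report the two discovery log entries shown above

The clean_kernel_target teardown traced in the next block is the mirror image: drop the symlink, rmdir the namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet.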
00:27:46.342 00:04:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:46.342 00:04:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:46.342 00:04:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:46.342 00:04:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:46.342 00:04:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:46.342 00:04:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:46.342 00:04:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:46.342 00:04:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:46.342 00:04:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:46.655 00:04:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:50.856 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:50.856 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:50.856 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:50.856 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:50.856 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:50.856 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:50.856 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:50.856 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:50.856 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:50.856 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:50.856 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:50.856 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:50.856 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:50.856 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:50.856 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:50.856 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:50.856 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:50.856 00:27:50.856 real 0m19.941s 00:27:50.856 user 0m5.316s 00:27:50.856 sys 0m11.752s 00:27:50.856 00:04:05 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1118 -- # xtrace_disable 00:27:50.856 00:04:05 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:50.856 ************************************ 00:27:50.856 END TEST nvmf_identify_kernel_target 00:27:50.856 ************************************ 00:27:50.856 00:04:05 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:27:50.856 00:04:05 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:50.856 00:04:05 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:27:50.856 00:04:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:27:50.856 00:04:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:50.856 ************************************ 00:27:50.856 START TEST nvmf_auth_host 00:27:50.856 ************************************ 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:50.857 * Looking for test storage... 00:27:50.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:50.857 00:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:59.000 
00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:59.000 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:59.000 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:59.000 Found net devices under 0000:31:00.0: 
cvl_0_0 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:59.000 Found net devices under 0000:31:00.1: cvl_0_1 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:59.000 00:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:59.000 00:04:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:59.000 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:59.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:59.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.725 ms 00:27:59.000 00:27:59.000 --- 10.0.0.2 ping statistics --- 00:27:59.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.001 rtt min/avg/max/mdev = 0.725/0.725/0.725/0.000 ms 00:27:59.001 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:59.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:59.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:27:59.001 00:27:59.001 --- 10.0.0.1 ping statistics --- 00:27:59.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.001 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:27:59.001 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:59.001 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:59.001 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:59.001 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:59.001 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:59.001 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:59.001 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:59.001 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:59.001 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:59.001 00:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:59.001 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:59.001 00:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:59.001 00:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.001 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=627225 00:27:59.001 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 627225 00:27:59.001 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:59.001 00:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@823 -- # '[' -z 627225 ']' 00:27:59.001 00:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.001 00:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # local max_retries=100 00:27:59.001 00:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
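nvmf_tcp_init above splits the two ports of the test NIC between the default namespace (initiator side, 10.0.0.1 on cvl_0_1) and a dedicated target namespace (10.0.0.2 on cvl_0_0), so NVMe/TCP traffic really crosses the link; the two pings confirm both directions before the target is started. A condensed sketch of that sequence, with eth_tgt/eth_ini as placeholder names for the two ports:

# eth_tgt and eth_ini are placeholders for the two connected ports (cvl_0_0/cvl_0_1 in this run)
ip netns add nvmf_tgt_ns
ip link set eth_tgt netns nvmf_tgt_ns                 # target port leaves the default namespace
ip addr add 10.0.0.1/24 dev eth_ini                   # initiator address stays in the default namespace
ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev eth_tgt
ip link set eth_ini up
ip netns exec nvmf_tgt_ns ip link set eth_tgt up
ip netns exec nvmf_tgt_ns ip link set lo up
iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1          # target -> initiator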
00:27:59.001 00:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # xtrace_disable 00:27:59.001 00:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # return 0 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=61b197abaf939c405bde635e39f61a43 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.fuI 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 61b197abaf939c405bde635e39f61a43 0 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 61b197abaf939c405bde635e39f61a43 0 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=61b197abaf939c405bde635e39f61a43 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.fuI 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.fuI 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.fuI 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:59.941 00:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:59.941 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:59.941 
00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:59.941 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:59.941 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=af9781dda95038b383f28d1326b2f649c2562ce17f3a52ca090490e6b1a48ab7 00:27:59.941 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.gLO 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key af9781dda95038b383f28d1326b2f649c2562ce17f3a52ca090490e6b1a48ab7 3 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 af9781dda95038b383f28d1326b2f649c2562ce17f3a52ca090490e6b1a48ab7 3 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=af9781dda95038b383f28d1326b2f649c2562ce17f3a52ca090490e6b1a48ab7 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.gLO 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.gLO 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.gLO 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=264bdde7d039c01325bafb6a2422aef55682d8c9d45d774d 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.MFC 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 264bdde7d039c01325bafb6a2422aef55682d8c9d45d774d 0 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 264bdde7d039c01325bafb6a2422aef55682d8c9d45d774d 0 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=264bdde7d039c01325bafb6a2422aef55682d8c9d45d774d 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.MFC 00:27:59.942 00:04:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.MFC 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.MFC 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:59.942 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:00.201 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2e641ba9bb0196bb6dee48a1accd9e594f30a6145702084e 00:28:00.201 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:00.201 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.KCY 00:28:00.201 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2e641ba9bb0196bb6dee48a1accd9e594f30a6145702084e 2 00:28:00.201 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2e641ba9bb0196bb6dee48a1accd9e594f30a6145702084e 2 00:28:00.201 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:00.201 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:00.201 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2e641ba9bb0196bb6dee48a1accd9e594f30a6145702084e 00:28:00.201 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:00.201 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:00.201 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.KCY 00:28:00.201 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.KCY 00:28:00.201 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.KCY 00:28:00.201 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:00.201 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:00.201 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:00.201 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:00.201 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:00.201 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:00.201 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=778540dd4f63e013208e7634a817db88 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.iSF 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 778540dd4f63e013208e7634a817db88 1 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 778540dd4f63e013208e7634a817db88 1 
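gen_dhchap_key, traced repeatedly above, reduces to two steps: read N random bytes with xxd, then wrap them in the DHHC-1 secret representation (base64 of the key bytes followed by their CRC32, prefixed with the digest identifier - 0 means use as-is, 1/2/3 mean SHA-256/384/512). The python helper called by format_dhchap_key is not expanded in this trace, so the snippet below is a sketch of the same transformation rather than the script's exact code:

key_hex=$(xxd -p -c0 -l 32 /dev/urandom)    # 32 random bytes -> 64 hex characters
python3 - "$key_hex" 3 <<'EOF'
import base64, binascii, struct, sys
key = bytes.fromhex(sys.argv[1])            # raw secret bytes
digest = int(sys.argv[2])                   # 0 = null, 1 = sha256, 2 = sha384, 3 = sha512
blob = key + struct.pack("<I", binascii.crc32(key))   # secret followed by its CRC32
print(f"DHHC-1:{digest:02x}:{base64.b64encode(blob).decode()}:")
EOF

For the sha512/64 case above this prints a string of the form DHHC-1:03:<base64>: which is what gets written to the /tmp/spdk.key-sha512.* file and chmod'ed to 0600.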
00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=778540dd4f63e013208e7634a817db88 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.iSF 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.iSF 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.iSF 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6e4fc67d150fc0e5a20bf044323c3975 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Mu1 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6e4fc67d150fc0e5a20bf044323c3975 1 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6e4fc67d150fc0e5a20bf044323c3975 1 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6e4fc67d150fc0e5a20bf044323c3975 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Mu1 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Mu1 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Mu1 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=39d5659ff083dbcf9771198d47d0b23c2ec1ec6f7da4f758 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.m0k 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 39d5659ff083dbcf9771198d47d0b23c2ec1ec6f7da4f758 2 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 39d5659ff083dbcf9771198d47d0b23c2ec1ec6f7da4f758 2 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=39d5659ff083dbcf9771198d47d0b23c2ec1ec6f7da4f758 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.m0k 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.m0k 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.m0k 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=631bb88419b249eaeb20d7fca3edec63 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ar2 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 631bb88419b249eaeb20d7fca3edec63 0 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 631bb88419b249eaeb20d7fca3edec63 0 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=631bb88419b249eaeb20d7fca3edec63 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:00.202 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ar2 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ar2 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.ar2 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=22ecf8c339afecc44e6fb1aea07771cc0bd1f5eca16000400128c2b01db968b0 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.09L 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 22ecf8c339afecc44e6fb1aea07771cc0bd1f5eca16000400128c2b01db968b0 3 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 22ecf8c339afecc44e6fb1aea07771cc0bd1f5eca16000400128c2b01db968b0 3 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=22ecf8c339afecc44e6fb1aea07771cc0bd1f5eca16000400128c2b01db968b0 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.09L 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.09L 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.09L 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 627225 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@823 -- # '[' -z 627225 ']' 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # local max_retries=100 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
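The entries above complete the key-generation loop: for each keyid, gen_dhchap_key draws len/2 random bytes from /dev/urandom with xxd, and format_key wraps the resulting hex string into an NVMe DHCHAP secret of the form DHHC-1:<digest>:<payload>: before the temp file is locked down with chmod 0600. The body of the python step is not captured by xtrace; the sketch below reconstructs one pass on the assumption that the payload is the base64 encoding of the secret followed by its little-endian CRC-32, which is the usual DHHC-1 representation.

# Sketch of one gen_dhchap_key pass, reconstructed from the xtrace above.
# The base64 + CRC-32 payload encoding is an assumption; only the surrounding
# shell steps are taken from the log.
digest=sha512 len=64                                  # len counts hex characters
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)        # len/2 random bytes as hex
file=$(mktemp -t "spdk.key-$digest.XXX")
python3 - "$key" "$digest" << 'EOF' > "$file"
import base64, sys, zlib
digests = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")
print(f"DHHC-1:{digests[sys.argv[2]]:02x}:{base64.b64encode(secret + crc).decode()}:")
EOF
chmod 0600 "$file"
echo "$file"                                          # path stored as keys[i] / ckeys[i]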
00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # xtrace_disable 00:28:00.462 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # return 0 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fuI 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.gLO ]] 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gLO 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.MFC 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.KCY ]] 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KCY 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.iSF 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Mu1 ]] 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Mu1 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
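With the target process (pid 627225) confirmed to be listening on /var/tmp/spdk.sock, each generated secret and its controller ("ckey") counterpart is loaded into the target through the keyring_file_add_key RPC; the remaining key/ckey pairs are registered the same way in the entries that follow. rpc_cmd is the autotest wrapper around scripts/rpc.py, so the calls above amount to:

# Direct scripts/rpc.py equivalents of the rpc_cmd invocations above
# (rpc_cmd wraps scripts/rpc.py and talks to the default /var/tmp/spdk.sock).
./scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.fuI
./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gLO
./scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.MFC
./scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KCY
./scripts/rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha256.iSF
./scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Mu1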
00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.m0k 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.ar2 ]] 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.ar2 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.09L 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
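Every key is now registered, and nvmet_auth_init / configure_kernel_target take over: the entries that follow stand up an in-kernel NVMe-oF target on 10.0.0.1:4420 through configfs so the SPDK host stack has a DH-HMAC-CHAP-capable controller to authenticate against. xtrace records the echoed values but not their redirect targets; the sketch below maps them onto the standard nvmet configfs attributes, which is an assumption about where each write lands.

# Condensed from the configure_kernel_target entries below; the attribute file
# names are assumed from the usual nvmet configfs layout (xtrace omits redirects).
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo "SPDK-nqn.2024-02.io.spdk:cnode0" > "$subsys/attr_model"
echo 1             > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"   # first unused, non-zoned nvme block device
echo 1             > "$subsys/namespaces/1/enable"
echo 10.0.0.1      > "$nvmet/ports/1/addr_traddr"
echo tcp           > "$nvmet/ports/1/addr_trtype"
echo 4420          > "$nvmet/ports/1/addr_trsvcid"
echo ipv4          > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"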
00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:00.723 00:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:04.929 Waiting for block devices as requested 00:28:04.929 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:04.929 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:04.929 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:04.929 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:04.929 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:04.929 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:04.929 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:04.929 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:04.929 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:05.189 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:05.189 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:05.189 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:05.189 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:05.449 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:05.449 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:05.449 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:05.709 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:06.281 No valid GPT data, bailing 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:28:06.281 00:28:06.281 Discovery Log Number of Records 2, Generation counter 2 00:28:06.281 =====Discovery Log Entry 0====== 00:28:06.281 trtype: tcp 00:28:06.281 adrfam: ipv4 00:28:06.281 subtype: current discovery subsystem 00:28:06.281 treq: not specified, sq flow control disable supported 00:28:06.281 portid: 1 00:28:06.281 trsvcid: 4420 00:28:06.281 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:06.281 traddr: 10.0.0.1 00:28:06.281 eflags: none 00:28:06.281 sectype: none 00:28:06.281 =====Discovery Log Entry 1====== 00:28:06.281 trtype: tcp 00:28:06.281 adrfam: ipv4 00:28:06.281 subtype: nvme subsystem 00:28:06.281 treq: not specified, sq flow control disable supported 00:28:06.281 portid: 1 00:28:06.281 trsvcid: 4420 00:28:06.281 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:06.281 traddr: 10.0.0.1 00:28:06.281 eflags: none 00:28:06.281 sectype: none 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 
]] 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:06.281 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:06.282 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.541 nvme0n1 00:28:06.541 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:06.541 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.541 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:06.541 
00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.541 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.541 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:06.541 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.541 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.541 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:06.541 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.541 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:06.541 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:06.541 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:06.541 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: ]] 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:06.542 
00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:06.542 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.802 nvme0n1 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:06.802 00:04:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: ]] 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:06.802 00:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.062 nvme0n1 00:28:07.062 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:07.062 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.062 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.062 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:07.062 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
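From here the test walks every (digest, dhgroup, keyid) combination with the same pattern: push the key material into the kernel host entry, restrict the SPDK host to the digest and dhgroup under test, attach with the matching --dhchap-key/--dhchap-ctrlr-key pair, then confirm the controller came up as nvme0 before detaching. Condensed, one iteration looks like the sketch below; the dhchap_* attribute paths under the host entry are assumed from the standard nvmet configfs layout, since xtrace does not show the redirect targets, and key, ckey, and keyid stand for the values of the current iteration.

# One connect_authenticate iteration, condensed from the surrounding entries.
# The dhchap_* attribute file names are an assumption; $key/$ckey hold the
# DHHC-1 secrets for $keyid.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo "$key"         > "$host/dhchap_key"
echo "$ckey"        > "$host/dhchap_ctrl_key"       # only when a controller key exists

./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"   # prints the bdev name, e.g. nvme0n1
[[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
./scripts/rpc.py bdev_nvme_detach_controller nvme0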
00:28:07.062 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:07.062 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.062 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.062 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:07.062 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.062 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:07.062 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.062 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:07.062 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.062 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:07.062 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:07.062 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:07.062 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:07.062 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:07.062 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:07.062 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: ]] 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.063 nvme0n1 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.063 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: ]] 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:07.323 00:04:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.323 nvme0n1 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:07.323 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.584 00:04:22 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:07.584 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.584 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:07.584 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.584 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:07.584 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:07.584 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:07.584 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:07.584 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:07.584 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:07.584 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:07.584 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:07.584 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:07.584 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:07.584 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.584 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:07.584 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:07.584 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:07.584 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.585 nvme0n1 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: ]] 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:07.585 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.846 nvme0n1 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: ]] 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:07.846 00:04:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.846 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:07.846 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.846 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.846 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.846 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.846 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.846 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.846 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.846 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.846 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.846 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.846 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.846 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:07.846 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:07.846 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.106 nvme0n1 00:28:08.106 
00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: ]] 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:08.106 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.365 nvme0n1 00:28:08.365 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:08.365 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.365 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.365 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:08.365 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.365 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:08.365 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
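
The host-side half of each iteration above reduces to four SPDK RPCs. A minimal sketch follows, assuming rpc_cmd in this trace wraps SPDK's scripts/rpc.py, that the kernel nvmet target configured earlier in the run is listening on 10.0.0.1:4420, and that key1/ckey1 are key names registered earlier in the script; this is an illustration of the trace, not the test script itself.

rpc=scripts/rpc.py                     # assumed wrapper; rpc_cmd in the trace resolves to this

# Limit the initiator to the digest/DH group under test (sha256 + ffdhe3072 in this pass).
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

# Connect with DH-HMAC-CHAP; the controller key is only passed when bidirectional
# authentication is being exercised for this key index.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Confirm the controller attached, then tear it down before the next combination.
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
$rpc bdev_nvme_detach_controller nvme0

Detaching after every pass is what lets the outer loops repeat the same connect for each digest, DH group and key index without leftover controller state.
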
00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: ]] 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:08.366 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.625 nvme0n1 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:08.625 
00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.625 00:04:23 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:08.625 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.885 nvme0n1 00:28:08.885 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:08.885 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.885 00:04:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.885 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:08.885 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.885 00:04:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: ]] 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:08.885 00:04:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:08.885 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.145 nvme0n1 00:28:09.145 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:09.145 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.145 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:09.145 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.145 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.145 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: ]] 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.405 00:04:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:09.405 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.666 nvme0n1 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: ]] 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.666 00:04:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:09.666 00:04:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.932 nvme0n1 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
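
The echo 'hmac(sha256)' / echo ffdheXXXX / echo DHHC-1:... lines are the target-side counterpart, nvmet_auth_set_key, pushing the same digest, DH group and key material into the kernel nvmet host entry before the host reconnects. The sketch below shows the idea; the configfs paths and attribute names are assumptions based on the Linux nvmet in-band-auth interface, not something this trace prints verbatim.

# Assumed configfs layout (Linux nvmet in-band auth); keys/ckeys are the arrays the trace iterates over.
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

echo 'hmac(sha256)'   > "$host_dir/dhchap_hash"       # digest for this iteration
echo 'ffdhe4096'      > "$host_dir/dhchap_dhgroup"    # DH group for this iteration
echo "${keys[3]}"     > "$host_dir/dhchap_key"        # DHHC-1 host key (keyid 3 here)
[ -n "${ckeys[3]}" ] && echo "${ckeys[3]}" > "$host_dir/dhchap_ctrl_key"   # controller key, when bidirectional
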
00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: ]] 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:09.932 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.192 nvme0n1 00:28:10.192 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:10.192 00:04:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.192 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:10.192 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.192 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.192 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:10.452 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.713 nvme0n1 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:10.713 00:04:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: ]] 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:10.713 00:04:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.286 nvme0n1 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.286 
00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: ]] 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.286 00:04:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:11.286 00:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.858 nvme0n1 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: ]] 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.858 00:04:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.859 00:04:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:11.859 00:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:11.859 00:04:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.118 nvme0n1 00:28:12.118 00:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:12.118 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.118 00:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:12.118 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.118 00:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.378 
00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: ]] 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:12.378 00:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:12.379 00:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.379 00:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:12.379 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.379 00:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.379 00:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.379 00:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.379 00:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.379 00:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.379 00:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.379 00:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.379 00:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.379 00:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.379 00:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.379 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:12.379 00:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:12.379 00:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.639 nvme0n1 00:28:12.639 00:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:12.639 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.639 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.639 00:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:12.899 00:04:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.469 nvme0n1 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: ]] 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:13.469 00:04:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.038 nvme0n1 00:28:14.038 00:04:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:14.038 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.038 00:04:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.038 00:04:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:14.038 00:04:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.038 00:04:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:14.038 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.038 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.038 00:04:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:14.038 00:04:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: ]] 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:14.297 00:04:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.865 nvme0n1 00:28:14.865 00:04:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:14.865 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.865 00:04:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.866 00:04:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:14.866 00:04:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.866 00:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:14.866 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.866 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.866 00:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:14.866 00:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: ]] 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:15.129 00:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.698 nvme0n1 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.698 
00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: ]] 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:15.698 00:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.959 00:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:15.959 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.959 00:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.959 00:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.959 00:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.959 00:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.959 00:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.959 00:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
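
[editor's note] For reference, the get_main_ns_ip block that the nvmf/common.sh@741-755 lines keep tracing resolves the initiator address that every bdev_nvme_attach_controller call in this phase uses. Below is a rough reconstruction from the expanded values visible in the trace; variable names such as TEST_TRANSPORT and the indirect ${!ip} expansion are inferred assumptions (the trace only shows the literals "tcp", "NVMF_INITIATOR_IP" and "10.0.0.1"), so treat it as a sketch rather than quoted source.

# Sketch of the IP-candidate selection traced at nvmf/common.sh@741-755.
get_main_ns_ip() {
	local ip
	local -A ip_candidates=()
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP

	[[ -z $TEST_TRANSPORT ]] && return 1                    # expands to "tcp" in this run
	[[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # name of the variable to dereference
	ip=${ip_candidates[$TEST_TRANSPORT]}                    # ip=NVMF_INITIATOR_IP in the trace
	[[ -z ${!ip} ]] && return 1                             # indirect expansion, 10.0.0.1 here
	echo "${!ip}"                                           # address passed to attach_controller
}
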
00:28:15.959 00:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.959 00:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.959 00:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.959 00:04:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.959 00:04:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:15.959 00:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:15.959 00:04:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.541 nvme0n1 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:16.541 
00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:16.541 00:04:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.489 nvme0n1 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: ]] 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.489 nvme0n1 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.489 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: ]] 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
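
[editor's note] The connect_authenticate step that follows is pure SPDK JSON-RPC, so the same sequence can be reproduced by hand with scripts/rpc.py. The addresses, NQNs, RPC names and flags below are copied from the trace for the sha384/ffdhe2048/keyid=1 iteration; the relative path to rpc.py is an assumption, and the names key1/ckey1 refer to DHHC-1 secrets that must already be registered with SPDK's keyring (done earlier in the test, outside this excerpt).

# Allow only the digest/dhgroup pair under test (here hmac(sha384) + ffdhe2048).
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# Attach with DH-HMAC-CHAP: --dhchap-key authenticates the host, --dhchap-ctrlr-key
# requests bidirectional authentication (present here because ckeys[1] is non-empty).
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify the controller came up, then detach before the next key/dhgroup combination.
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0
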
00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.828 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.829 nvme0n1 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: ]] 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:17.829 00:04:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.133 nvme0n1 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: ]] 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.133 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.134 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:18.134 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:18.134 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.134 nvme0n1 00:28:18.134 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:18.134 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.134 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.134 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:18.134 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.134 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.394 nvme0n1 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- 
# xtrace_disable 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.394 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: ]] 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
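
[editor's note] On the target side, the nvmet_auth_set_key helper (host/auth.sh@42-51) only shows the values being echoed in this trace: 'hmac(sha384)', the dhgroup, and the DHHC-1 secrets; where they are written is not visible in this excerpt. A minimal sketch of the equivalent kernel-nvmet configuration for the sha384/ffdhe3072/keyid=0 iteration, assuming the standard nvmet configfs attributes and a host entry named after the test hostnqn (both assumptions, not quoted from the log):

# Assumed layout: Linux nvmet configfs host entry for the initiator's hostnqn (needs root).
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

echo 'hmac(sha384)' > "$host/dhchap_hash"     # digest for this iteration
echo 'ffdhe3072'    > "$host/dhchap_dhgroup"  # DH group for this iteration
# Host key (keyid 0) and controller key, copied from the trace; the controller key is
# only written when a ckey exists for the keyid (it is empty for keyid 4).
echo 'DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri:' > "$host/dhchap_key"
echo 'DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=:' > "$host/dhchap_ctrl_key"
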
00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.655 nvme0n1 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:18.655 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.916 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:18.916 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:18.916 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: ]] 00:28:18.916 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:18.916 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
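
[editor's note] Stepping back, the @100-@104 markers that recur in the trace give the overall shape of this phase of the test: nested loops over digests, DH groups, and key IDs, each iteration programming the kernel target and then doing an authenticated attach/verify/detach from the SPDK host. A structural reconstruction follows; the array contents are limited to what this excerpt shows, and nvmet_auth_set_key/connect_authenticate are the helpers traced above, so this is a sketch of the loop, not the script itself.

digests=(sha256 sha384)                               # digests exercised in this excerpt
dhgroups=(ffdhe2048 ffdhe3072 ffdhe6144 ffdhe8192)    # dhgroups exercised in this excerpt
# keys[] / ckeys[] are indexed 0-4 and hold the DHHC-1 secrets quoted in the trace;
# ckeys[4] is empty, which is why keyid 4 attaches without --dhchap-ctrlr-key.

for digest in "${digests[@]}"; do                     # host/auth.sh@100
	for dhgroup in "${dhgroups[@]}"; do               # host/auth.sh@101
		for keyid in "${!keys[@]}"; do                # host/auth.sh@102
			nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103: program the target
			connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104: attach, verify, detach
		done
	done
done
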
00:28:18.916 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.916 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.916 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:18.916 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:18.916 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.916 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:18.916 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:18.916 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.916 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:18.916 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.916 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.916 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.916 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.916 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.916 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.917 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.917 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.917 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.917 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.917 00:04:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.917 00:04:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:18.917 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:18.917 00:04:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.917 nvme0n1 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: ]] 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.917 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.178 nvme0n1 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: ]] 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:19.178 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.439 nvme0n1 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:19.439 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:19.440 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:19.440 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.440 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:19.440 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:19.440 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:19.440 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:19.440 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.440 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.440 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:19.440 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:19.440 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.440 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:19.440 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:19.440 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.701 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.702 nvme0n1 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:19.702 00:04:34 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: ]] 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:19.702 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.963 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:19.963 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.963 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.963 00:04:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.963 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.963 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.963 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.963 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.963 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.963 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.963 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.963 00:04:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.963 00:04:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:19.963 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:19.963 00:04:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.224 nvme0n1 00:28:20.224 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:20.224 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.224 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.224 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:20.224 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: ]] 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:20.225 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.486 nvme0n1 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:20.486 00:04:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: ]] 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:20.486 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.747 nvme0n1 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: ]] 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:20.747 00:04:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:20.747 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:21.008 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:21.008 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.008 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:21.008 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:21.008 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.008 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:21.008 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.008 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.008 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.008 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.008 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.008 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.008 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.008 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.008 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.008 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.008 00:04:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.008 00:04:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:21.008 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:21.008 00:04:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.268 nvme0n1 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:28:21.268 00:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.529 nvme0n1 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: ]] 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:21.529 00:04:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.100 nvme0n1 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: ]] 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:22.100 00:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.669 nvme0n1 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:22.669 00:04:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: ]] 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.669 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.670 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:22.670 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:22.670 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.670 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:22.670 00:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:22.670 00:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.670 00:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:22.670 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.670 00:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:22.670 00:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.670 00:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.670 00:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.670 00:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.670 00:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.670 00:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.670 00:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.670 00:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.670 00:04:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.670 00:04:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:22.670 00:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:22.670 00:04:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.239 nvme0n1 00:28:23.239 00:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:23.239 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.239 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.239 00:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:23.239 00:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: ]] 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:23.240 00:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.499 nvme0n1 00:28:23.499 00:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:23.499 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.499 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.499 00:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:23.499 00:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
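The get_main_ns_ip trace repeated throughout this run (nvmf/common.sh@741-755) simply resolves which address the initiator should dial for the active transport. A minimal sketch of that helper, reconstructed from the xtrace rather than copied from nvmf/common.sh — the TEST_TRANSPORT name and the indirect expansion are assumptions inferred from the already-expanded values ("tcp", NVMF_INITIATOR_IP, 10.0.0.1) visible in the log:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    # Map each transport to the environment variable that holds its address.
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # TEST_TRANSPORT is an assumed name; in this run it expands to "tcp".
    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    # Indirect expansion: the chosen variable (NVMF_INITIATOR_IP) holds 10.0.0.1 here.
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}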
00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:23.760 00:04:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.020 nvme0n1 00:28:24.020 00:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:24.020 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.020 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.020 00:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:24.020 00:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: ]] 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
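Each connect_authenticate pass in this trace repeats the same recipe for one digest/dhgroup/keyid combination: program the target's expected secret, restrict the SPDK initiator to the matching DH-HMAC-CHAP parameters, attach, verify the controller appeared, and detach. A condensed sketch of that sweep, assembled from the traced host/auth.sh lines — the digests, dhgroups, keys and ckeys arrays are assumed to hold the values printed in this log, and rpc_cmd / nvmet_auth_set_key are the test-framework helpers already exercised above, not new commands:

# Sweep reconstructed from the host/auth.sh xtrace; helpers and the
# keys/ckeys arrays come from the surrounding SPDK test framework.
for digest in "${digests[@]}"; do                      # e.g. sha384, sha512
  for dhgroup in "${dhgroups[@]}"; do                  # e.g. ffdhe6144, ffdhe8192, ffdhe2048
    for keyid in "${!keys[@]}"; do                     # keyids 0..4
      # Target side: expected hash, DH group, key and (optional) controller key.
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
      # Host side: allow only the matching digest/dhgroup, then attach with the key pair.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
      # The attach only succeeds if authentication passed; confirm, then clean up.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done
done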
00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:24.281 00:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.853 nvme0n1 00:28:24.853 00:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:24.853 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.853 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.853 00:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:24.853 00:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.853 00:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:24.853 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.853 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.853 00:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:24.853 00:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.853 00:04:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:24.853 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.853 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:28:24.853 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.853 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:24.853 00:04:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: ]] 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:24.853 00:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.795 nvme0n1 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: ]] 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:25.795 00:04:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.365 nvme0n1 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: ]] 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:26.366 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:26.627 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:26.627 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.627 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:26.627 00:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:26.627 00:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.627 00:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:26.627 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.627 00:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:26.627 00:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:26.627 00:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:26.627 00:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.627 00:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.627 00:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:26.627 00:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.627 00:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:26.627 00:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:26.627 00:04:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:26.627 00:04:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:26.627 00:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:26.627 00:04:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.199 nvme0n1 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:27.199 00:04:42 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:27.199 00:04:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.148 nvme0n1 00:28:28.148 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:28.148 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.148 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.148 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:28.148 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.148 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:28.148 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.148 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.148 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:28.148 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.148 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:28.148 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:28.148 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:28.148 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.148 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: ]] 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.149 nvme0n1 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.149 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:28.409 00:04:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: ]] 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.409 nvme0n1 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:28.409 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: ]] 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.669 nvme0n1 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:28.669 00:04:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: ]] 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.669 00:04:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:28.669 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.929 nvme0n1 00:28:28.929 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:28.929 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.929 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:28.929 00:04:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.929 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.929 00:04:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.929 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.930 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.930 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.930 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.930 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.930 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.930 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.930 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.930 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:28.930 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:28.930 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.189 nvme0n1 00:28:29.189 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:29.189 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.189 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.189 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:29.189 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.189 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:29.189 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: ]] 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:29.190 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.451 nvme0n1 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:29.451 
00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: ]] 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:29.451 00:04:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:29.451 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.711 nvme0n1 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: ]] 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:29.712 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.973 nvme0n1 00:28:29.973 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:29.973 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.973 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.973 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:29.973 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.973 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:29.973 00:04:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.973 00:04:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.973 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:29.973 00:04:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: ]] 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
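Each block of the trace is one pass of the loops at host/auth.sh@101-104: program the target key, then drive one attach/verify/detach cycle from the host. A condensed sketch of that cycle built from the rpc_cmd invocations visible above (dhgroups, keys and ckeys are the test's fixture arrays holding the DHHC-1 secrets shown in the trace; rpc_cmd is the autotest wrapper around the SPDK RPC client):

    for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144
        for keyid in "${!keys[@]}"; do         # 0..4
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"      # target side, sketched above
            ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # keyid 4 carries no controller key
            rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
                    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                    --dhchap-key "key${keyid}" "${ckey[@]}"
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # auth succeeded
            rpc_cmd bdev_nvme_detach_controller nvme0          # clean up before the next keyid
        done
    done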
00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:29.973 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.234 nvme0n1 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:30.234 
00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:30.234 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.495 nvme0n1 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: ]] 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:30.495 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.755 nvme0n1 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: ]] 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.756 00:04:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:30.756 00:04:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.015 nvme0n1 00:28:31.015 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:31.015 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.015 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.015 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:31.015 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.015 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:31.274 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.274 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.274 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:31.274 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.274 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:31.274 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.274 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:31.274 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.274 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:31.274 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:31.274 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:28:31.274 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:31.274 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:31.274 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:31.274 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:31.274 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:31.274 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: ]] 00:28:31.274 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:31.275 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:31.275 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.275 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:31.275 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:31.275 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:31.275 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.275 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:31.275 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:31.275 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.275 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:31.275 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.275 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.275 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.275 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.275 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.275 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.275 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.275 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.275 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.275 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.275 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.275 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:31.275 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:31.275 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.535 nvme0n1 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: ]] 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:31.535 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.795 nvme0n1 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:31.795 00:04:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.055 nvme0n1 00:28:32.055 00:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:32.055 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.055 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.055 00:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:32.055 00:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.055 00:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- 
# xtrace_disable 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: ]] 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:32.315 00:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.574 nvme0n1 00:28:32.574 00:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:32.574 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.574 00:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:32.574 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.574 00:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: ]] 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
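The repeated local ip / ip_candidates entries (nvmf/common.sh@741-755) are get_main_ns_ip picking which address the host dials for the transport under test. A reconstruction from the trace; the TEST_TRANSPORT name is an assumption (only its value, tcp, is visible) and the final lookup is assumed to use bash indirect expansion:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1               # trace shows [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}               # NVMF_INITIATOR_IP for tcp
        [[ -z ${!ip} ]] && return 1                        # that variable must hold an address
        echo "${!ip}"                                      # 10.0.0.1 in this run
    }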
00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:32.834 00:04:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.426 nvme0n1 00:28:33.426 00:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:33.426 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.426 00:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:33.426 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: ]] 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:33.427 00:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.687 nvme0n1 00:28:33.687 00:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:33.687 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.687 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.687 00:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:33.687 00:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.687 00:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:33.687 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.687 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.687 00:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:33.687 00:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.687 00:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:33.687 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: ]] 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:33.947 00:04:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.207 nvme0n1 00:28:34.207 00:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:34.207 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.207 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.207 00:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:34.207 00:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.207 00:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:34.207 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.208 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.208 00:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:34.208 00:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.467 00:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:34.467 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.467 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:34.467 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.467 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:34.467 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:34.467 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:34.467 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:34.467 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:34.467 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:34.467 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:34.468 00:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.728 nvme0n1 00:28:34.728 00:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:34.728 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.728 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.728 00:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:34.728 00:04:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.728 00:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:34.728 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.728 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.728 00:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:34.728 00:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.988 00:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:34.988 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:34.988 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.988 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:34.988 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.988 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:34.988 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:34.988 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:34.988 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:34.988 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:34.988 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:34.988 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:34.988 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjFiMTk3YWJhZjkzOWM0MDViZGU2MzVlMzlmNjFhNDMs4nri: 00:28:34.988 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: ]] 00:28:34.988 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWY5NzgxZGRhOTUwMzhiMzgzZjI4ZDEzMjZiMmY2NDljMjU2MmNlMTdmM2E1MmNhMDkwNDkwZTZiMWE0OGFiN5UvutM=: 00:28:34.988 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:34.988 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.988 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:34.988 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:34.989 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:34.989 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.989 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:34.989 00:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:34.989 00:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.989 00:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:34.989 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.989 00:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.989 00:04:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.989 00:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.989 00:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.989 00:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.989 00:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.989 00:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.989 00:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.989 00:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.989 00:04:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.989 00:04:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:34.989 00:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:34.989 00:04:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.559 nvme0n1 00:28:35.559 00:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:35.559 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.559 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.559 00:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:35.559 00:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.559 00:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:35.559 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.559 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.559 00:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:35.559 00:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: ]] 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:35.819 00:04:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.391 nvme0n1 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:36.391 00:04:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4NTQwZGQ0ZjYzZTAxMzIwOGU3NjM0YTgxN2RiODgHLoQZ: 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: ]] 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmU0ZmM2N2QxNTBmYzBlNWEyMGJmMDQ0MzIzYzM5NzU6BVzq: 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:36.391 00:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.652 00:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:36.652 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.652 00:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:36.652 00:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:36.652 00:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:36.652 00:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.652 00:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.652 00:04:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:36.652 00:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.652 00:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:36.652 00:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:36.652 00:04:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:36.652 00:04:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:36.652 00:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:36.652 00:04:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.234 nvme0n1 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzlkNTY1OWZmMDgzZGJjZjk3NzExOThkNDdkMGIyM2MyZWMxZWM2ZjdkYTRmNzU4jiUJWQ==: 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: ]] 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjMxYmI4ODQxOWIyNDllYWViMjBkN2ZjYTNlZGVjNjPpGfuP: 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:37.234 00:04:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:37.234 00:04:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.176 nvme0n1 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjJlY2Y4YzMzOWFmZWNjNDRlNmZiMWFlYTA3NzcxY2MwYmQxZjVlY2ExNjAwMDQwMDEyOGMyYjAxZGI5NjhiMMef2o0=: 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:28:38.176 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.749 nvme0n1 00:28:38.750 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:38.750 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.750 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.750 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:38.750 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.750 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjY0YmRkZTdkMDM5YzAxMzI1YmFmYjZhMjQyMmFlZjU1NjgyZDhjOWQ0NWQ3NzRkSZIh0A==: 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: ]] 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmU2NDFiYTliYjAxOTZiYjZkZWU0OGExYWNjZDllNTk0ZjMwYTYxNDU3MDIwODRl017HaQ==: 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.011 
00:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@642 -- # local es=0 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@645 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:39.011 00:04:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.011 request: 00:28:39.011 { 00:28:39.011 "name": "nvme0", 00:28:39.011 "trtype": "tcp", 00:28:39.011 "traddr": "10.0.0.1", 00:28:39.011 "adrfam": "ipv4", 00:28:39.011 "trsvcid": "4420", 00:28:39.011 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:39.011 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:39.011 "prchk_reftag": false, 00:28:39.011 "prchk_guard": false, 00:28:39.011 "hdgst": false, 00:28:39.011 "ddgst": false, 00:28:39.011 "method": "bdev_nvme_attach_controller", 00:28:39.011 "req_id": 1 00:28:39.011 } 00:28:39.011 Got JSON-RPC error response 00:28:39.011 response: 00:28:39.011 { 00:28:39.011 "code": -5, 00:28:39.011 "message": "Input/output error" 00:28:39.011 } 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@645 -- # es=1 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@642 -- # local es=0 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:28:39.011 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@645 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.012 request: 00:28:39.012 { 00:28:39.012 "name": "nvme0", 00:28:39.012 "trtype": "tcp", 00:28:39.012 "traddr": "10.0.0.1", 00:28:39.012 "adrfam": "ipv4", 00:28:39.012 "trsvcid": "4420", 00:28:39.012 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:39.012 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:39.012 "prchk_reftag": false, 00:28:39.012 "prchk_guard": false, 00:28:39.012 "hdgst": false, 00:28:39.012 "ddgst": false, 00:28:39.012 "dhchap_key": "key2", 00:28:39.012 "method": "bdev_nvme_attach_controller", 00:28:39.012 "req_id": 1 00:28:39.012 } 00:28:39.012 Got JSON-RPC error response 00:28:39.012 response: 00:28:39.012 { 00:28:39.012 "code": -5, 00:28:39.012 "message": "Input/output error" 00:28:39.012 } 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@645 -- # es=1 00:28:39.012 00:04:54 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@642 -- # local es=0 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@645 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:39.012 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.273 request: 00:28:39.273 { 00:28:39.273 "name": "nvme0", 00:28:39.273 "trtype": "tcp", 00:28:39.273 "traddr": "10.0.0.1", 00:28:39.273 "adrfam": "ipv4", 
00:28:39.273 "trsvcid": "4420", 00:28:39.273 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:39.273 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:39.273 "prchk_reftag": false, 00:28:39.273 "prchk_guard": false, 00:28:39.273 "hdgst": false, 00:28:39.273 "ddgst": false, 00:28:39.273 "dhchap_key": "key1", 00:28:39.273 "dhchap_ctrlr_key": "ckey2", 00:28:39.273 "method": "bdev_nvme_attach_controller", 00:28:39.273 "req_id": 1 00:28:39.273 } 00:28:39.273 Got JSON-RPC error response 00:28:39.273 response: 00:28:39.273 { 00:28:39.273 "code": -5, 00:28:39.273 "message": "Input/output error" 00:28:39.273 } 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@645 -- # es=1 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:39.273 rmmod nvme_tcp 00:28:39.273 rmmod nvme_fabrics 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 627225 ']' 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 627225 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@942 -- # '[' -z 627225 ']' 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # kill -0 627225 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@947 -- # uname 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 627225 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@960 -- # echo 'killing process with pid 627225' 00:28:39.273 killing process with pid 627225 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@961 -- # kill 627225 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # wait 627225 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:39.273 00:04:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.820 00:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:41.820 00:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:41.820 00:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:41.820 00:04:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:41.820 00:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:41.820 00:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:41.820 00:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:41.820 00:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:41.820 00:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:41.820 00:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:41.820 00:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:41.820 00:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:41.820 00:04:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:46.024 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:46.024 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:46.024 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:46.024 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:46.024 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:46.024 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:46.024 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:46.024 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:46.024 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:46.024 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:46.024 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:46.024 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:46.024 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:46.024 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:46.024 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:46.024 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:46.024 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:46.024 00:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.fuI /tmp/spdk.key-null.MFC /tmp/spdk.key-sha256.iSF /tmp/spdk.key-sha384.m0k /tmp/spdk.key-sha512.09L 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:46.024 00:05:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:49.327 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:49.327 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:49.327 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:49.327 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:49.327 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:49.327 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:49.327 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:49.327 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:49.327 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:49.327 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:49.327 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:49.327 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:49.327 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:49.327 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:49.327 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:49.327 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:49.327 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:49.327 00:28:49.327 real 0m58.860s 00:28:49.327 user 0m51.797s 00:28:49.327 sys 0m16.135s 00:28:49.327 00:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1118 -- # xtrace_disable 00:28:49.327 00:05:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.327 ************************************ 00:28:49.327 END TEST nvmf_auth_host 00:28:49.327 ************************************ 00:28:49.587 00:05:04 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:28:49.587 00:05:04 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:28:49.587 00:05:04 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:49.587 00:05:04 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:28:49.587 00:05:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:28:49.587 00:05:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:49.587 ************************************ 00:28:49.587 START TEST nvmf_digest 00:28:49.587 ************************************ 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:49.587 * Looking for test storage... 
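Before the digest suite starts, the kernel nvmet target left over from the auth test is dismantled bottom-up, exactly in the order traced above: host link, host entry, subsystem disable, port link, namespace, port, subsystem, and finally the modules. Condensed (the redirect target of the bare 'echo 0' is not shown in the trace and is assumed to be the subsystem enable attribute):

    cfs=/sys/kernel/config/nvmet
    subsys=$cfs/subsystems/nqn.2024-02.io.spdk:cnode0
    rm    "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"     # unlink the host from the subsystem
    rmdir "$cfs/hosts/nqn.2024-02.io.spdk:host0"
    echo 0 > "$subsys/enable"                                   # assumed target of the 'echo 0' in the trace
    rm -f "$cfs/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"  # detach the subsystem from the port
    rmdir "$subsys/namespaces/1"
    rmdir "$cfs/ports/1"
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet                                 # finally unload the kernel target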
00:28:49.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:49.587 00:05:04 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:49.587 00:05:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:49.588 00:05:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:57.873 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:57.873 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:57.873 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:57.874 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:57.874 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:57.874 Found net devices under 0000:31:00.0: cvl_0_0 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:57.874 Found net devices under 0000:31:00.1: cvl_0_1 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:57.874 00:05:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:57.874 00:05:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:57.874 00:05:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:57.874 00:05:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:57.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:57.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.743 ms 00:28:57.874 00:28:57.874 --- 10.0.0.2 ping statistics --- 00:28:57.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.874 rtt min/avg/max/mdev = 0.743/0.743/0.743/0.000 ms 00:28:57.874 00:05:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:57.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:57.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:28:57.874 00:28:57.874 --- 10.0.0.1 ping statistics --- 00:28:57.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.874 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:28:57.874 00:05:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:57.874 00:05:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:57.874 00:05:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:57.874 00:05:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:57.874 00:05:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:57.874 00:05:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:57.874 00:05:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:57.874 00:05:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:57.874 00:05:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:58.136 00:05:13 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:58.136 00:05:13 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:58.136 00:05:13 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:58.136 00:05:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:28:58.136 00:05:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:28:58.136 00:05:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:58.136 ************************************ 00:28:58.136 START TEST nvmf_digest_clean 00:28:58.136 ************************************ 00:28:58.136 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1117 -- # run_digest 00:28:58.136 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:58.136 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:58.136 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:58.136 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:58.136 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:58.136 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:58.136 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:58.136 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:58.136 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=644599 00:28:58.136 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 644599 00:28:58.136 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:58.136 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@823 -- # '[' -z 644599 ']' 00:28:58.136 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.136 
00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # local max_retries=100 00:28:58.136 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.136 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # xtrace_disable 00:28:58.136 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:58.136 [2024-07-16 00:05:13.186132] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:28:58.136 [2024-07-16 00:05:13.186194] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.136 [2024-07-16 00:05:13.265643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.397 [2024-07-16 00:05:13.338480] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:58.397 [2024-07-16 00:05:13.338522] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.397 [2024-07-16 00:05:13.338529] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:58.397 [2024-07-16 00:05:13.338536] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:58.397 [2024-07-16 00:05:13.338542] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
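The digest suite runs over a physical e810 port pair split across network namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is launched inside the namespace. Condensed from the trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                          # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &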
00:28:58.397 [2024-07-16 00:05:13.338559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.966 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:28:58.966 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # return 0 00:28:58.966 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:58.966 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:58.966 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:58.966 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:58.966 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:58.966 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:58.966 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:58.966 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:58.966 00:05:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:58.966 null0 00:28:58.966 [2024-07-16 00:05:14.061203] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:58.966 [2024-07-16 00:05:14.085373] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:58.966 00:05:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:58.966 00:05:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:58.966 00:05:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:58.966 00:05:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:58.966 00:05:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:58.966 00:05:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:58.966 00:05:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:58.966 00:05:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:58.966 00:05:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=644693 00:28:58.966 00:05:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 644693 /var/tmp/bperf.sock 00:28:58.966 00:05:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@823 -- # '[' -z 644693 ']' 00:28:58.966 00:05:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:58.966 00:05:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:58.966 00:05:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # local max_retries=100 00:28:58.966 00:05:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:58.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:58.966 00:05:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # xtrace_disable 00:28:58.966 00:05:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:58.966 [2024-07-16 00:05:14.138596] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:28:58.966 [2024-07-16 00:05:14.138643] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid644693 ] 00:28:59.226 [2024-07-16 00:05:14.218965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.227 [2024-07-16 00:05:14.283223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.798 00:05:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:28:59.798 00:05:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # return 0 00:28:59.798 00:05:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:59.798 00:05:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:59.798 00:05:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:00.058 00:05:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:00.058 00:05:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:00.318 nvme0n1 00:29:00.318 00:05:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:00.318 00:05:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:00.578 Running I/O for 2 seconds... 
00:29:02.493 00:29:02.493 Latency(us) 00:29:02.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:02.493 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:02.493 nvme0n1 : 2.00 20786.85 81.20 0.00 0.00 6150.42 3153.92 18786.99 00:29:02.493 =================================================================================================================== 00:29:02.493 Total : 20786.85 81.20 0.00 0.00 6150.42 3153.92 18786.99 00:29:02.493 0 00:29:02.493 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:02.493 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:02.493 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:02.493 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:02.493 | select(.opcode=="crc32c") 00:29:02.493 | "\(.module_name) \(.executed)"' 00:29:02.493 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:02.754 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:02.754 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:02.754 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:02.754 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:02.754 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 644693 00:29:02.754 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@942 -- # '[' -z 644693 ']' 00:29:02.754 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # kill -0 644693 00:29:02.754 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # uname 00:29:02.754 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:29:02.754 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 644693 00:29:02.754 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:29:02.754 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:29:02.754 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 644693' 00:29:02.754 killing process with pid 644693 00:29:02.754 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill 644693 00:29:02.754 Received shutdown signal, test time was about 2.000000 seconds 00:29:02.754 00:29:02.755 Latency(us) 00:29:02.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:02.755 =================================================================================================================== 00:29:02.755 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:02.755 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # wait 644693 00:29:02.755 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:02.755 00:05:17 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:02.755 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:02.755 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:02.755 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:02.755 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:02.755 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:02.755 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=645381 00:29:02.755 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 645381 /var/tmp/bperf.sock 00:29:02.755 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@823 -- # '[' -z 645381 ']' 00:29:02.755 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:02.755 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:02.755 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # local max_retries=100 00:29:02.755 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:02.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:02.755 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # xtrace_disable 00:29:02.755 00:05:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:03.016 [2024-07-16 00:05:17.945871] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:29:03.016 [2024-07-16 00:05:17.945948] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid645381 ] 00:29:03.016 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:03.016 Zero copy mechanism will not be used. 
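Each run_bperf invocation repeats the same RPC-driven cycle seen in the first run above: bdevperf is started paused on its own RPC socket, accel framework init is triggered, an NVMe-oF controller is attached with the digest option under test (data digest, --ddgst, in these clean runs), the 2-second workload is driven through bdevperf.py, and the process is killed afterwards. Condensed, with paths relative to the spdk tree:

    sock=/var/tmp/bperf.sock
    ./build/examples/bdevperf -m 2 -r $sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    ./scripts/rpc.py -s $sock framework_start_init              # accel modules are selected here
    ./scripts/rpc.py -s $sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests  # starts the timed run (-z made bdevperf wait for it)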
00:29:03.016 [2024-07-16 00:05:18.026911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.016 [2024-07-16 00:05:18.080214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.589 00:05:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:29:03.589 00:05:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # return 0 00:29:03.589 00:05:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:03.589 00:05:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:03.589 00:05:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:03.849 00:05:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:03.849 00:05:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:04.109 nvme0n1 00:29:04.109 00:05:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:04.109 00:05:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:04.109 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:04.109 Zero copy mechanism will not be used. 00:29:04.109 Running I/O for 2 seconds... 
00:29:06.650 00:29:06.650 Latency(us) 00:29:06.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:06.650 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:06.650 nvme0n1 : 2.00 2912.98 364.12 0.00 0.00 5490.22 3235.84 12506.45 00:29:06.650 =================================================================================================================== 00:29:06.650 Total : 2912.98 364.12 0.00 0.00 5490.22 3235.84 12506.45 00:29:06.650 0 00:29:06.650 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:06.650 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:06.650 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:06.651 | select(.opcode=="crc32c") 00:29:06.651 | "\(.module_name) \(.executed)"' 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 645381 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@942 -- # '[' -z 645381 ']' 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # kill -0 645381 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # uname 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 645381 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 645381' 00:29:06.651 killing process with pid 645381 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill 645381 00:29:06.651 Received shutdown signal, test time was about 2.000000 seconds 00:29:06.651 00:29:06.651 Latency(us) 00:29:06.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:06.651 =================================================================================================================== 00:29:06.651 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # wait 645381 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:06.651 00:05:21 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=646144 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 646144 /var/tmp/bperf.sock 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@823 -- # '[' -z 646144 ']' 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # local max_retries=100 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:06.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # xtrace_disable 00:29:06.651 00:05:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:06.651 [2024-07-16 00:05:21.671386] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:29:06.651 [2024-07-16 00:05:21.671446] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid646144 ] 00:29:06.651 [2024-07-16 00:05:21.752250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.651 [2024-07-16 00:05:21.806019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.591 00:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:29:07.591 00:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # return 0 00:29:07.591 00:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:07.591 00:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:07.591 00:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:07.592 00:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:07.592 00:05:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:07.851 nvme0n1 00:29:07.851 00:05:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:07.851 00:05:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:08.112 Running I/O for 2 seconds... 
00:29:10.026 00:29:10.026 Latency(us) 00:29:10.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.026 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:10.026 nvme0n1 : 2.01 21884.69 85.49 0.00 0.00 5840.99 2211.84 14199.47 00:29:10.026 =================================================================================================================== 00:29:10.026 Total : 21884.69 85.49 0.00 0.00 5840.99 2211.84 14199.47 00:29:10.026 0 00:29:10.026 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:10.026 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:10.026 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:10.026 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:10.026 | select(.opcode=="crc32c") 00:29:10.026 | "\(.module_name) \(.executed)"' 00:29:10.026 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 646144 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@942 -- # '[' -z 646144 ']' 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # kill -0 646144 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # uname 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 646144 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 646144' 00:29:10.288 killing process with pid 646144 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill 646144 00:29:10.288 Received shutdown signal, test time was about 2.000000 seconds 00:29:10.288 00:29:10.288 Latency(us) 00:29:10.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.288 =================================================================================================================== 00:29:10.288 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # wait 646144 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:10.288 00:05:25 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=646976 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 646976 /var/tmp/bperf.sock 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@823 -- # '[' -z 646976 ']' 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # local max_retries=100 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:10.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # xtrace_disable 00:29:10.288 00:05:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:10.548 [2024-07-16 00:05:25.511676] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:29:10.548 [2024-07-16 00:05:25.511733] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid646976 ] 00:29:10.548 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:10.548 Zero copy mechanism will not be used. 
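After every workload the test does not rely on the I/O numbers alone; it asks the bdevperf instance which accel module actually computed the CRC-32C digests and how many operations it executed, expecting a non-zero count from the software module since scan_dsa is false in these runs. The check, as traced:

    read -r acc_module acc_executed < <(
        ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 ))                  # digests really went through the accel layer
    [[ $acc_module == software ]]           # DSA disabled, so the software module is expected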
00:29:10.548 [2024-07-16 00:05:25.593244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.548 [2024-07-16 00:05:25.647255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.118 00:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:29:11.118 00:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # return 0 00:29:11.118 00:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:11.119 00:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:11.119 00:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:11.378 00:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:11.378 00:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:11.638 nvme0n1 00:29:11.638 00:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:11.638 00:05:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:11.638 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:11.638 Zero copy mechanism will not be used. 00:29:11.638 Running I/O for 2 seconds... 
00:29:14.181 00:29:14.181 Latency(us) 00:29:14.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.181 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:14.181 nvme0n1 : 2.00 3986.65 498.33 0.00 0.00 4008.01 1884.16 14199.47 00:29:14.181 =================================================================================================================== 00:29:14.181 Total : 3986.65 498.33 0.00 0.00 4008.01 1884.16 14199.47 00:29:14.181 0 00:29:14.181 00:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:14.181 00:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:14.181 00:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:14.181 00:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:14.181 | select(.opcode=="crc32c") 00:29:14.181 | "\(.module_name) \(.executed)"' 00:29:14.181 00:05:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:14.181 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:14.181 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:14.181 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:14.182 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:14.182 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 646976 00:29:14.182 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@942 -- # '[' -z 646976 ']' 00:29:14.182 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # kill -0 646976 00:29:14.182 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # uname 00:29:14.182 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:29:14.182 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 646976 00:29:14.182 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:29:14.182 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:29:14.182 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 646976' 00:29:14.182 killing process with pid 646976 00:29:14.182 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill 646976 00:29:14.182 Received shutdown signal, test time was about 2.000000 seconds 00:29:14.182 00:29:14.182 Latency(us) 00:29:14.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.182 =================================================================================================================== 00:29:14.182 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:14.182 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # wait 646976 00:29:14.182 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 644599 00:29:14.182 00:05:29 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@942 -- # '[' -z 644599 ']' 00:29:14.182 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # kill -0 644599 00:29:14.182 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # uname 00:29:14.182 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:29:14.182 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 644599 00:29:14.182 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:29:14.182 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:29:14.182 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 644599' 00:29:14.182 killing process with pid 644599 00:29:14.182 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill 644599 00:29:14.182 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # wait 644599 00:29:14.443 00:29:14.443 real 0m16.251s 00:29:14.443 user 0m31.915s 00:29:14.443 sys 0m3.296s 00:29:14.443 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1118 -- # xtrace_disable 00:29:14.443 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:14.443 ************************************ 00:29:14.443 END TEST nvmf_digest_clean 00:29:14.443 ************************************ 00:29:14.443 00:05:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1136 -- # return 0 00:29:14.443 00:05:29 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:14.443 00:05:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:29:14.443 00:05:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:29:14.443 00:05:29 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:14.443 ************************************ 00:29:14.443 START TEST nvmf_digest_error 00:29:14.443 ************************************ 00:29:14.443 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1117 -- # run_digest_error 00:29:14.443 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:14.443 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:14.443 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:14.443 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:14.443 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=647783 00:29:14.443 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 647783 00:29:14.443 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:14.443 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@823 -- # '[' -z 647783 ']' 00:29:14.443 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 
00:29:14.443 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # local max_retries=100 00:29:14.443 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.443 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # xtrace_disable 00:29:14.443 00:05:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:14.443 [2024-07-16 00:05:29.513548] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:29:14.443 [2024-07-16 00:05:29.513600] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.443 [2024-07-16 00:05:29.588591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.705 [2024-07-16 00:05:29.658094] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:14.705 [2024-07-16 00:05:29.658134] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:14.705 [2024-07-16 00:05:29.658141] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:14.705 [2024-07-16 00:05:29.658148] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:14.705 [2024-07-16 00:05:29.658153] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
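For the error-path test the target is brought up with --wait-for-rpc so that the crc32c opcode can be re-routed before initialization completes. A condensed sketch of that launch, matching the command and notices above; the polling loop is only an illustrative stand-in for the harness's waitforlisten helper:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # wait until the target answers on its default RPC socket (/var/tmp/spdk.sock)
    until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    # -e 0xFFFF enables every tracepoint group; per the notices above the trace can be
    # inspected live with 'spdk_trace -s nvmf -i 0' or copied from /dev/shm/nvmf_trace.0.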
00:29:14.705 [2024-07-16 00:05:29.658176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # return 0 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:15.278 [2024-07-16 00:05:30.316078] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:15.278 null0 00:29:15.278 [2024-07-16 00:05:30.396929] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.278 [2024-07-16 00:05:30.421115] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=647870 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 647870 /var/tmp/bperf.sock 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@823 -- # '[' -z 647870 ']' 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # local 
max_retries=100 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:15.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # xtrace_disable 00:29:15.278 00:05:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:15.539 [2024-07-16 00:05:30.477024] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:29:15.539 [2024-07-16 00:05:30.477072] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid647870 ] 00:29:15.539 [2024-07-16 00:05:30.556095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.539 [2024-07-16 00:05:30.609989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.110 00:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:29:16.110 00:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # return 0 00:29:16.110 00:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:16.110 00:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:16.372 00:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:16.372 00:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:16.372 00:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:16.372 00:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:16.372 00:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:16.372 00:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:16.632 nvme0n1 00:29:16.632 00:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:16.632 00:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:16.632 00:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:16.632 00:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:16.633 00:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:16.633 00:05:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
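Everything from here on is expected failure traffic: crc32c on the target has been assigned to the error-injecting accel module, so once corruption is enabled the initiator side (nvme_tcp.c inside bdevperf) sees mismatched data digests on its reads and each affected I/O completes with a transient transport error, to be retried per --bdev-retry-count -1. A condensed sketch of that wiring as it appears above, with the target on the default /var/tmp/spdk.sock and bdevperf on /var/tmp/bperf.sock:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # target side, right after the --wait-for-rpc launch: route crc32c to the error module
    scripts/rpc.py accel_assign_opc -o crc32c -m error
    # (target framework init, TCP transport, listener and the null0 bdev follow - omitted here)
    # initiator side: start bdevperf waiting for RPC, then wait for its socket to appear
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
    until [ -S /var/tmp/bperf.sock ]; do sleep 0.5; done
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # keep injection disabled while the controller attaches, then start corrupting crc32c results
    scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each repeated block below is one instance of that path: the data digest error reported by nvme_tcp.c, the READ it belonged to, and its COMMAND TRANSIENT TRANSPORT ERROR completion.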
00:29:16.893 Running I/O for 2 seconds... 00:29:16.893 [2024-07-16 00:05:31.875698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:16.894 [2024-07-16 00:05:31.875729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.894 [2024-07-16 00:05:31.875742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.894 [2024-07-16 00:05:31.888946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:16.894 [2024-07-16 00:05:31.888965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.894 [2024-07-16 00:05:31.888972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.894 [2024-07-16 00:05:31.902549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:16.894 [2024-07-16 00:05:31.902568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.894 [2024-07-16 00:05:31.902575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.894 [2024-07-16 00:05:31.915394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:16.894 [2024-07-16 00:05:31.915412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.894 [2024-07-16 00:05:31.915419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.894 [2024-07-16 00:05:31.928760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:16.894 [2024-07-16 00:05:31.928777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.894 [2024-07-16 00:05:31.928784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.894 [2024-07-16 00:05:31.941182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:16.894 [2024-07-16 00:05:31.941199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.894 [2024-07-16 00:05:31.941206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.894 [2024-07-16 00:05:31.953036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:16.894 [2024-07-16 00:05:31.953053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.894 [2024-07-16 00:05:31.953060] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.894 [2024-07-16 00:05:31.963275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:16.894 [2024-07-16 00:05:31.963293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.894 [2024-07-16 00:05:31.963299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.894 [2024-07-16 00:05:31.976915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:16.894 [2024-07-16 00:05:31.976932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.894 [2024-07-16 00:05:31.976939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.894 [2024-07-16 00:05:31.989878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:16.894 [2024-07-16 00:05:31.989899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.894 [2024-07-16 00:05:31.989905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.894 [2024-07-16 00:05:32.003867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:16.894 [2024-07-16 00:05:32.003885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.894 [2024-07-16 00:05:32.003891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.894 [2024-07-16 00:05:32.016033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:16.894 [2024-07-16 00:05:32.016051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.894 [2024-07-16 00:05:32.016058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.894 [2024-07-16 00:05:32.028878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:16.894 [2024-07-16 00:05:32.028896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.894 [2024-07-16 00:05:32.028902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.894 [2024-07-16 00:05:32.039840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:16.894 [2024-07-16 00:05:32.039858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.894 [2024-07-16 
00:05:32.039865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.894 [2024-07-16 00:05:32.052054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:16.894 [2024-07-16 00:05:32.052072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.894 [2024-07-16 00:05:32.052078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.894 [2024-07-16 00:05:32.066016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:16.894 [2024-07-16 00:05:32.066034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.894 [2024-07-16 00:05:32.066041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.894 [2024-07-16 00:05:32.078317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:16.894 [2024-07-16 00:05:32.078335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.894 [2024-07-16 00:05:32.078343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.156 [2024-07-16 00:05:32.090001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.156 [2024-07-16 00:05:32.090019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.156 [2024-07-16 00:05:32.090029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.156 [2024-07-16 00:05:32.102510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.156 [2024-07-16 00:05:32.102528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.156 [2024-07-16 00:05:32.102535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.156 [2024-07-16 00:05:32.115773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.156 [2024-07-16 00:05:32.115791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.156 [2024-07-16 00:05:32.115797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.156 [2024-07-16 00:05:32.127282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.156 [2024-07-16 00:05:32.127300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2026 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:17.156 [2024-07-16 00:05:32.127306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.156 [2024-07-16 00:05:32.138112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.156 [2024-07-16 00:05:32.138129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.156 [2024-07-16 00:05:32.138136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.156 [2024-07-16 00:05:32.151349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.156 [2024-07-16 00:05:32.151375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.156 [2024-07-16 00:05:32.151382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.156 [2024-07-16 00:05:32.165130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.156 [2024-07-16 00:05:32.165149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.156 [2024-07-16 00:05:32.165155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.156 [2024-07-16 00:05:32.177891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.156 [2024-07-16 00:05:32.177909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.156 [2024-07-16 00:05:32.177916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.156 [2024-07-16 00:05:32.190715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.156 [2024-07-16 00:05:32.190733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.156 [2024-07-16 00:05:32.190739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.156 [2024-07-16 00:05:32.203755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.156 [2024-07-16 00:05:32.203776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.156 [2024-07-16 00:05:32.203783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.156 [2024-07-16 00:05:32.214698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.156 [2024-07-16 00:05:32.214716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:66 nsid:1 lba:17910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.156 [2024-07-16 00:05:32.214722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.156 [2024-07-16 00:05:32.226563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.156 [2024-07-16 00:05:32.226580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.156 [2024-07-16 00:05:32.226587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.156 [2024-07-16 00:05:32.240210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.156 [2024-07-16 00:05:32.240227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.156 [2024-07-16 00:05:32.240239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.156 [2024-07-16 00:05:32.253347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.156 [2024-07-16 00:05:32.253366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.156 [2024-07-16 00:05:32.253372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.156 [2024-07-16 00:05:32.266174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.156 [2024-07-16 00:05:32.266192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.156 [2024-07-16 00:05:32.266198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.156 [2024-07-16 00:05:32.278063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.156 [2024-07-16 00:05:32.278081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.156 [2024-07-16 00:05:32.278087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.156 [2024-07-16 00:05:32.291024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.156 [2024-07-16 00:05:32.291042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.157 [2024-07-16 00:05:32.291049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.157 [2024-07-16 00:05:32.301467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.157 [2024-07-16 00:05:32.301484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.157 [2024-07-16 00:05:32.301491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.157 [2024-07-16 00:05:32.314162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.157 [2024-07-16 00:05:32.314179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.157 [2024-07-16 00:05:32.314186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.157 [2024-07-16 00:05:32.326328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.157 [2024-07-16 00:05:32.326345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.157 [2024-07-16 00:05:32.326351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.157 [2024-07-16 00:05:32.339310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.157 [2024-07-16 00:05:32.339328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.157 [2024-07-16 00:05:32.339334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.417 [2024-07-16 00:05:32.352151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.417 [2024-07-16 00:05:32.352170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.417 [2024-07-16 00:05:32.352177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.417 [2024-07-16 00:05:32.365345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.417 [2024-07-16 00:05:32.365363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.417 [2024-07-16 00:05:32.365370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.417 [2024-07-16 00:05:32.377600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.417 [2024-07-16 00:05:32.377618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.417 [2024-07-16 00:05:32.377625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.418 [2024-07-16 00:05:32.390981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 
00:29:17.418 [2024-07-16 00:05:32.390999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.418 [2024-07-16 00:05:32.391006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.418 [2024-07-16 00:05:32.401745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.418 [2024-07-16 00:05:32.401762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.418 [2024-07-16 00:05:32.401768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.418 [2024-07-16 00:05:32.415277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.418 [2024-07-16 00:05:32.415296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.418 [2024-07-16 00:05:32.415306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.418 [2024-07-16 00:05:32.425742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.418 [2024-07-16 00:05:32.425760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.418 [2024-07-16 00:05:32.425766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.418 [2024-07-16 00:05:32.439217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.418 [2024-07-16 00:05:32.439238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.418 [2024-07-16 00:05:32.439245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.418 [2024-07-16 00:05:32.451729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.418 [2024-07-16 00:05:32.451747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.418 [2024-07-16 00:05:32.451754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.418 [2024-07-16 00:05:32.465629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.418 [2024-07-16 00:05:32.465646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.418 [2024-07-16 00:05:32.465653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.418 [2024-07-16 00:05:32.478545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.418 [2024-07-16 00:05:32.478562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.418 [2024-07-16 00:05:32.478569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.418 [2024-07-16 00:05:32.489637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.418 [2024-07-16 00:05:32.489654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.418 [2024-07-16 00:05:32.489661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.418 [2024-07-16 00:05:32.502121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.418 [2024-07-16 00:05:32.502139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.418 [2024-07-16 00:05:32.502146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.418 [2024-07-16 00:05:32.515844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.418 [2024-07-16 00:05:32.515861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.418 [2024-07-16 00:05:32.515868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.418 [2024-07-16 00:05:32.526351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.418 [2024-07-16 00:05:32.526374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.418 [2024-07-16 00:05:32.526380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.418 [2024-07-16 00:05:32.538718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.418 [2024-07-16 00:05:32.538735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.418 [2024-07-16 00:05:32.538742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.418 [2024-07-16 00:05:32.552382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.418 [2024-07-16 00:05:32.552400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.418 [2024-07-16 00:05:32.552406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.418 [2024-07-16 00:05:32.563916] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.418 [2024-07-16 00:05:32.563934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.418 [2024-07-16 00:05:32.563941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.418 [2024-07-16 00:05:32.578295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.418 [2024-07-16 00:05:32.578313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.418 [2024-07-16 00:05:32.578319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.418 [2024-07-16 00:05:32.590695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.418 [2024-07-16 00:05:32.590712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.418 [2024-07-16 00:05:32.590719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.418 [2024-07-16 00:05:32.601029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.418 [2024-07-16 00:05:32.601047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.418 [2024-07-16 00:05:32.601053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.679 [2024-07-16 00:05:32.613430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.679 [2024-07-16 00:05:32.613448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.679 [2024-07-16 00:05:32.613454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.679 [2024-07-16 00:05:32.627200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.679 [2024-07-16 00:05:32.627218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.679 [2024-07-16 00:05:32.627224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.679 [2024-07-16 00:05:32.639718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.679 [2024-07-16 00:05:32.639736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.679 [2024-07-16 00:05:32.639742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:29:17.679 [2024-07-16 00:05:32.652872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.679 [2024-07-16 00:05:32.652890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.679 [2024-07-16 00:05:32.652896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.679 [2024-07-16 00:05:32.665227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.679 [2024-07-16 00:05:32.665249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.679 [2024-07-16 00:05:32.665255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.679 [2024-07-16 00:05:32.677584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.679 [2024-07-16 00:05:32.677601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.679 [2024-07-16 00:05:32.677607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.679 [2024-07-16 00:05:32.689368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.679 [2024-07-16 00:05:32.689386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.680 [2024-07-16 00:05:32.689393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.680 [2024-07-16 00:05:32.702372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.680 [2024-07-16 00:05:32.702390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.680 [2024-07-16 00:05:32.702396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.680 [2024-07-16 00:05:32.715171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.680 [2024-07-16 00:05:32.715189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.680 [2024-07-16 00:05:32.715197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.680 [2024-07-16 00:05:32.726280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.680 [2024-07-16 00:05:32.726297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.680 [2024-07-16 00:05:32.726304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.680 [2024-07-16 00:05:32.738176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.680 [2024-07-16 00:05:32.738196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.680 [2024-07-16 00:05:32.738203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.680 [2024-07-16 00:05:32.751824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.680 [2024-07-16 00:05:32.751842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.680 [2024-07-16 00:05:32.751848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.680 [2024-07-16 00:05:32.765390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.680 [2024-07-16 00:05:32.765407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.680 [2024-07-16 00:05:32.765413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.680 [2024-07-16 00:05:32.775097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.680 [2024-07-16 00:05:32.775115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.680 [2024-07-16 00:05:32.775121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.680 [2024-07-16 00:05:32.787795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.680 [2024-07-16 00:05:32.787812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.680 [2024-07-16 00:05:32.787818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.680 [2024-07-16 00:05:32.800995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.680 [2024-07-16 00:05:32.801012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.680 [2024-07-16 00:05:32.801018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.680 [2024-07-16 00:05:32.814453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.680 [2024-07-16 00:05:32.814470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.680 [2024-07-16 00:05:32.814477] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.680 [2024-07-16 00:05:32.827456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.680 [2024-07-16 00:05:32.827473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.680 [2024-07-16 00:05:32.827480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.680 [2024-07-16 00:05:32.840063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.680 [2024-07-16 00:05:32.840080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.680 [2024-07-16 00:05:32.840087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.680 [2024-07-16 00:05:32.849768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.680 [2024-07-16 00:05:32.849785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.680 [2024-07-16 00:05:32.849791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.680 [2024-07-16 00:05:32.863482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.680 [2024-07-16 00:05:32.863499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.680 [2024-07-16 00:05:32.863506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.942 [2024-07-16 00:05:32.876760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.942 [2024-07-16 00:05:32.876777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.942 [2024-07-16 00:05:32.876784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.942 [2024-07-16 00:05:32.890735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.942 [2024-07-16 00:05:32.890752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.942 [2024-07-16 00:05:32.890758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.942 [2024-07-16 00:05:32.904006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.942 [2024-07-16 00:05:32.904024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:17.942 [2024-07-16 00:05:32.904030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.942 [2024-07-16 00:05:32.913136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.942 [2024-07-16 00:05:32.913154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.942 [2024-07-16 00:05:32.913160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.942 [2024-07-16 00:05:32.926433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.942 [2024-07-16 00:05:32.926450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.942 [2024-07-16 00:05:32.926456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.942 [2024-07-16 00:05:32.939423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.942 [2024-07-16 00:05:32.939440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.942 [2024-07-16 00:05:32.939447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.942 [2024-07-16 00:05:32.951487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.942 [2024-07-16 00:05:32.951504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.942 [2024-07-16 00:05:32.951514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.942 [2024-07-16 00:05:32.964710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.942 [2024-07-16 00:05:32.964727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.942 [2024-07-16 00:05:32.964733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.942 [2024-07-16 00:05:32.977675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.942 [2024-07-16 00:05:32.977692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.942 [2024-07-16 00:05:32.977698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.942 [2024-07-16 00:05:32.990222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.942 [2024-07-16 00:05:32.990243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9256 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.942 [2024-07-16 00:05:32.990249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.942 [2024-07-16 00:05:33.001188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.942 [2024-07-16 00:05:33.001206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.942 [2024-07-16 00:05:33.001212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.942 [2024-07-16 00:05:33.013438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.942 [2024-07-16 00:05:33.013455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.942 [2024-07-16 00:05:33.013462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.942 [2024-07-16 00:05:33.027367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.942 [2024-07-16 00:05:33.027384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.942 [2024-07-16 00:05:33.027390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.942 [2024-07-16 00:05:33.040183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.942 [2024-07-16 00:05:33.040200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.942 [2024-07-16 00:05:33.040207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.942 [2024-07-16 00:05:33.050872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.942 [2024-07-16 00:05:33.050889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.942 [2024-07-16 00:05:33.050896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.942 [2024-07-16 00:05:33.063344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.942 [2024-07-16 00:05:33.063365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.942 [2024-07-16 00:05:33.063371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.942 [2024-07-16 00:05:33.075965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.942 [2024-07-16 00:05:33.075983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.942 [2024-07-16 00:05:33.075989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.942 [2024-07-16 00:05:33.088403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.942 [2024-07-16 00:05:33.088420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.942 [2024-07-16 00:05:33.088427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.942 [2024-07-16 00:05:33.101639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.942 [2024-07-16 00:05:33.101656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.942 [2024-07-16 00:05:33.101663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.942 [2024-07-16 00:05:33.114837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.942 [2024-07-16 00:05:33.114853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.942 [2024-07-16 00:05:33.114860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.942 [2024-07-16 00:05:33.127216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:17.942 [2024-07-16 00:05:33.127236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.942 [2024-07-16 00:05:33.127243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.205 [2024-07-16 00:05:33.137835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.205 [2024-07-16 00:05:33.137853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.205 [2024-07-16 00:05:33.137859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.205 [2024-07-16 00:05:33.151670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.205 [2024-07-16 00:05:33.151687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.205 [2024-07-16 00:05:33.151694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.205 [2024-07-16 00:05:33.165624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 
00:29:18.205 [2024-07-16 00:05:33.165641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.205 [2024-07-16 00:05:33.165648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.205 [2024-07-16 00:05:33.177841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.205 [2024-07-16 00:05:33.177858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.205 [2024-07-16 00:05:33.177864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.205 [2024-07-16 00:05:33.190400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.205 [2024-07-16 00:05:33.190418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.205 [2024-07-16 00:05:33.190424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.205 [2024-07-16 00:05:33.202578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.205 [2024-07-16 00:05:33.202594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.205 [2024-07-16 00:05:33.202601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.205 [2024-07-16 00:05:33.213249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.205 [2024-07-16 00:05:33.213266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.205 [2024-07-16 00:05:33.213273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.205 [2024-07-16 00:05:33.226814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.205 [2024-07-16 00:05:33.226831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.205 [2024-07-16 00:05:33.226838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.205 [2024-07-16 00:05:33.239261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.205 [2024-07-16 00:05:33.239278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.205 [2024-07-16 00:05:33.239285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.205 [2024-07-16 00:05:33.251792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.205 [2024-07-16 00:05:33.251809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.205 [2024-07-16 00:05:33.251815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.205 [2024-07-16 00:05:33.263263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.205 [2024-07-16 00:05:33.263280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.205 [2024-07-16 00:05:33.263286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.205 [2024-07-16 00:05:33.277875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.205 [2024-07-16 00:05:33.277892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.205 [2024-07-16 00:05:33.277902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.205 [2024-07-16 00:05:33.289818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.205 [2024-07-16 00:05:33.289836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.205 [2024-07-16 00:05:33.289842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.205 [2024-07-16 00:05:33.301816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.205 [2024-07-16 00:05:33.301833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.205 [2024-07-16 00:05:33.301840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.205 [2024-07-16 00:05:33.312424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.205 [2024-07-16 00:05:33.312441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.205 [2024-07-16 00:05:33.312447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.205 [2024-07-16 00:05:33.325801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.206 [2024-07-16 00:05:33.325818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.206 [2024-07-16 00:05:33.325825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.206 [2024-07-16 00:05:33.338305] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.206 [2024-07-16 00:05:33.338322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.206 [2024-07-16 00:05:33.338328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.206 [2024-07-16 00:05:33.351296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.206 [2024-07-16 00:05:33.351314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.206 [2024-07-16 00:05:33.351320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.206 [2024-07-16 00:05:33.364417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.206 [2024-07-16 00:05:33.364434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.206 [2024-07-16 00:05:33.364441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.206 [2024-07-16 00:05:33.377915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.206 [2024-07-16 00:05:33.377933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.206 [2024-07-16 00:05:33.377939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.206 [2024-07-16 00:05:33.389206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.206 [2024-07-16 00:05:33.389226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.206 [2024-07-16 00:05:33.389235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.467 [2024-07-16 00:05:33.399566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.467 [2024-07-16 00:05:33.399591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.467 [2024-07-16 00:05:33.399597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.467 [2024-07-16 00:05:33.412379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.467 [2024-07-16 00:05:33.412396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.467 [2024-07-16 00:05:33.412403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:29:18.467 [2024-07-16 00:05:33.425973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.467 [2024-07-16 00:05:33.425990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.467 [2024-07-16 00:05:33.425997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.467 [2024-07-16 00:05:33.439246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.467 [2024-07-16 00:05:33.439262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.467 [2024-07-16 00:05:33.439269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.467 [2024-07-16 00:05:33.450773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.467 [2024-07-16 00:05:33.450790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.468 [2024-07-16 00:05:33.450797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.468 [2024-07-16 00:05:33.462821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.468 [2024-07-16 00:05:33.462839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.468 [2024-07-16 00:05:33.462846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.468 [2024-07-16 00:05:33.475071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.468 [2024-07-16 00:05:33.475089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.468 [2024-07-16 00:05:33.475095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.468 [2024-07-16 00:05:33.488245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.468 [2024-07-16 00:05:33.488262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.468 [2024-07-16 00:05:33.488272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.468 [2024-07-16 00:05:33.500874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.468 [2024-07-16 00:05:33.500891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.468 [2024-07-16 00:05:33.500897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.468 [2024-07-16 00:05:33.514174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.468 [2024-07-16 00:05:33.514192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.468 [2024-07-16 00:05:33.514199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.468 [2024-07-16 00:05:33.527008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.468 [2024-07-16 00:05:33.527026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.468 [2024-07-16 00:05:33.527032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.468 [2024-07-16 00:05:33.538933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.468 [2024-07-16 00:05:33.538950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.468 [2024-07-16 00:05:33.538956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.468 [2024-07-16 00:05:33.551608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.468 [2024-07-16 00:05:33.551626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.468 [2024-07-16 00:05:33.551632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.468 [2024-07-16 00:05:33.562985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.468 [2024-07-16 00:05:33.563002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.468 [2024-07-16 00:05:33.563008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.468 [2024-07-16 00:05:33.573901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.468 [2024-07-16 00:05:33.573918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.468 [2024-07-16 00:05:33.573925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.468 [2024-07-16 00:05:33.586809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.468 [2024-07-16 00:05:33.586827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.468 [2024-07-16 00:05:33.586833] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.468 [2024-07-16 00:05:33.600053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.468 [2024-07-16 00:05:33.600074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.468 [2024-07-16 00:05:33.600081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.468 [2024-07-16 00:05:33.613625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.468 [2024-07-16 00:05:33.613642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.468 [2024-07-16 00:05:33.613648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.468 [2024-07-16 00:05:33.626126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.468 [2024-07-16 00:05:33.626143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.468 [2024-07-16 00:05:33.626149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.468 [2024-07-16 00:05:33.636590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.468 [2024-07-16 00:05:33.636607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.468 [2024-07-16 00:05:33.636613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.468 [2024-07-16 00:05:33.650344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.468 [2024-07-16 00:05:33.650361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.468 [2024-07-16 00:05:33.650368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.729 [2024-07-16 00:05:33.662801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.729 [2024-07-16 00:05:33.662818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.729 [2024-07-16 00:05:33.662824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.729 [2024-07-16 00:05:33.675684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.729 [2024-07-16 00:05:33.675701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:18.729 [2024-07-16 00:05:33.675707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.729 [2024-07-16 00:05:33.686932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.729 [2024-07-16 00:05:33.686949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.729 [2024-07-16 00:05:33.686955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.729 [2024-07-16 00:05:33.698910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.729 [2024-07-16 00:05:33.698927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.729 [2024-07-16 00:05:33.698934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.729 [2024-07-16 00:05:33.711119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.729 [2024-07-16 00:05:33.711137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.729 [2024-07-16 00:05:33.711143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.729 [2024-07-16 00:05:33.723842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.729 [2024-07-16 00:05:33.723859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.729 [2024-07-16 00:05:33.723865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.729 [2024-07-16 00:05:33.737023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.729 [2024-07-16 00:05:33.737040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.729 [2024-07-16 00:05:33.737046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.729 [2024-07-16 00:05:33.749374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.729 [2024-07-16 00:05:33.749391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.729 [2024-07-16 00:05:33.749398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.729 [2024-07-16 00:05:33.760744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.729 [2024-07-16 00:05:33.760762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 
lba:6814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.729 [2024-07-16 00:05:33.760768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.729 [2024-07-16 00:05:33.772463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.729 [2024-07-16 00:05:33.772481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.730 [2024-07-16 00:05:33.772487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.730 [2024-07-16 00:05:33.785137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.730 [2024-07-16 00:05:33.785154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.730 [2024-07-16 00:05:33.785161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.730 [2024-07-16 00:05:33.797806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.730 [2024-07-16 00:05:33.797823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.730 [2024-07-16 00:05:33.797829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.730 [2024-07-16 00:05:33.811559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.730 [2024-07-16 00:05:33.811577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.730 [2024-07-16 00:05:33.811586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.730 [2024-07-16 00:05:33.824389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.730 [2024-07-16 00:05:33.824407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.730 [2024-07-16 00:05:33.824413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.730 [2024-07-16 00:05:33.836283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.730 [2024-07-16 00:05:33.836301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.730 [2024-07-16 00:05:33.836308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.730 [2024-07-16 00:05:33.847825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.730 [2024-07-16 00:05:33.847842] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.730 [2024-07-16 00:05:33.847849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.730 [2024-07-16 00:05:33.859287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x230ac70) 00:29:18.730 [2024-07-16 00:05:33.859305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.730 [2024-07-16 00:05:33.859311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.730 00:29:18.730 Latency(us) 00:29:18.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.730 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:18.730 nvme0n1 : 2.00 20437.90 79.84 0.00 0.00 6257.63 1966.08 16711.68 00:29:18.730 =================================================================================================================== 00:29:18.730 Total : 20437.90 79.84 0.00 0.00 6257.63 1966.08 16711.68 00:29:18.730 0 00:29:18.730 00:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:18.730 00:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:18.730 00:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:18.730 00:05:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:18.730 | .driver_specific 00:29:18.730 | .nvme_error 00:29:18.730 | .status_code 00:29:18.730 | .command_transient_transport_error' 00:29:18.990 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 160 > 0 )) 00:29:18.990 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 647870 00:29:18.990 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@942 -- # '[' -z 647870 ']' 00:29:18.990 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # kill -0 647870 00:29:18.990 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # uname 00:29:18.990 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:29:18.990 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 647870 00:29:18.990 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:29:18.990 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:29:18.991 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # echo 'killing process with pid 647870' 00:29:18.991 killing process with pid 647870 00:29:18.991 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill 647870 00:29:18.991 Received shutdown signal, test time was about 2.000000 seconds 00:29:18.991 00:29:18.991 Latency(us) 00:29:18.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:29:18.991 =================================================================================================================== 00:29:18.991 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:18.991 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # wait 647870 00:29:19.252 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:29:19.252 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:19.252 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:19.252 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:19.252 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:19.252 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=648663 00:29:19.252 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 648663 /var/tmp/bperf.sock 00:29:19.252 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@823 -- # '[' -z 648663 ']' 00:29:19.252 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:29:19.252 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:19.252 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # local max_retries=100 00:29:19.252 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:19.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:19.252 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # xtrace_disable 00:29:19.252 00:05:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:19.252 [2024-07-16 00:05:34.271228] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:29:19.252 [2024-07-16 00:05:34.271296] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid648663 ] 00:29:19.252 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:19.252 Zero copy mechanism will not be used. 
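The xtrace above shows how the pass/fail decision for this digest-error run is made: host/digest.sh reads the bdev's NVMe error counters over the bdevperf RPC socket and requires a non-zero count of transient transport errors (160 were recorded here; the summary's 79.84 MiB/s is also consistent with 20437.90 IOPS at 4 KiB). Below is a minimal sketch of that check, reconstructed from the traced commands rather than from the actual host/digest.sh source; the socket path, bdev name, and jq path are taken from the log, while the script layout and variable names are illustrative.

#!/usr/bin/env bash
# Sketch of the transient-error check traced above (not the real
# host/digest.sh implementation). Assumes bdevperf is still listening on the
# RPC socket it was started with (-r) and exposes the bdev "nvme0n1".
BPERF_SOCK=/var/tmp/bperf.sock
RPC=./spdk/scripts/rpc.py   # illustrative path to an SPDK checkout

get_transient_errcount() {
        local bdev=$1
        # bdev_nvme_set_options --nvme-error-stat (see the setup traced further
        # down) makes iostat carry per-status-code NVMe error counters.
        "$RPC" -s "$BPERF_SOCK" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0]
                        | .driver_specific
                        | .nvme_error
                        | .status_code
                        | .command_transient_transport_error'
}

count=$(get_transient_errcount nvme0n1)
# The injected crc32c corruption must have produced at least one
# COMMAND TRANSIENT TRANSPORT ERROR completion; this run counted 160.
(( count > 0 ))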
00:29:19.252 [2024-07-16 00:05:34.354179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.252 [2024-07-16 00:05:34.407632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.194 00:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:29:20.194 00:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # return 0 00:29:20.194 00:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:20.194 00:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:20.194 00:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:20.194 00:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:20.194 00:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:20.194 00:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:20.194 00:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:20.194 00:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:20.456 nvme0n1 00:29:20.456 00:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:20.456 00:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:20.456 00:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:20.456 00:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:20.456 00:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:20.456 00:05:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:20.717 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:20.717 Zero copy mechanism will not be used. 00:29:20.717 Running I/O for 2 seconds... 
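With the target still serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, the second error case (run_bperf_err randread 131072 16) repeats the same pattern at 128 KiB I/O and queue depth 16: start a fresh bdevperf, attach the controller with TCP data digest enabled, then switch accel crc32c error injection from disable to corrupt so computed data digests stop matching. The following is a condensed, hedged sketch of the RPC sequence the trace shows; the host/digest.sh wrapper functions (bperf_rpc, rpc_cmd) are inlined, SPDK_DIR stands in for the Jenkins workspace path, and the real test waits on the socket with waitforlisten before issuing RPCs.

#!/usr/bin/env bash
# Condensed sketch of the setup traced above for the 128 KiB / qd 16 run.
SPDK_DIR=./spdk                       # stand-in for the workspace checkout
BPERF_SOCK=/var/tmp/bperf.sock

# bdevperf on core 1 (mask 0x2), 128 KiB random reads, qd 16, 2 s runtime,
# idling (-z) until configured over its RPC socket.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
        -w randread -o 131072 -t 2 -q 16 -z &

# Per-bdev NVMe error statistics and unlimited retries on the initiator side.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

# crc32c injection starts disabled; no -s flag here, matching the bare rpc_cmd
# calls in the trace (default RPC socket, presumably the NVMe-oF target app).
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# Attach the remote controller with TCP data digest (--ddgst) enabled.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Switch injection to corrupt crc32c results (-t corrupt -i 32, as traced), so
# data digests mismatch and reads complete with TRANSIENT TRANSPORT ERROR.
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# Start the timed I/O run; the digest-error log lines that follow are expected.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests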
00:29:20.717 [2024-07-16 00:05:35.699582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.717 [2024-07-16 00:05:35.699615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.717 [2024-07-16 00:05:35.699624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.717 [2024-07-16 00:05:35.713940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.717 [2024-07-16 00:05:35.713964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.717 [2024-07-16 00:05:35.713972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.717 [2024-07-16 00:05:35.725125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.717 [2024-07-16 00:05:35.725145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.717 [2024-07-16 00:05:35.725152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.717 [2024-07-16 00:05:35.738704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.717 [2024-07-16 00:05:35.738722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.717 [2024-07-16 00:05:35.738729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.717 [2024-07-16 00:05:35.748391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.717 [2024-07-16 00:05:35.748410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.717 [2024-07-16 00:05:35.748417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.717 [2024-07-16 00:05:35.759262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.717 [2024-07-16 00:05:35.759281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.717 [2024-07-16 00:05:35.759287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.717 [2024-07-16 00:05:35.769580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.717 [2024-07-16 00:05:35.769599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.717 [2024-07-16 00:05:35.769605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.717 [2024-07-16 00:05:35.779881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.717 [2024-07-16 00:05:35.779899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.717 [2024-07-16 00:05:35.779906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.717 [2024-07-16 00:05:35.790143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.717 [2024-07-16 00:05:35.790162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.717 [2024-07-16 00:05:35.790169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.717 [2024-07-16 00:05:35.800179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.717 [2024-07-16 00:05:35.800198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.717 [2024-07-16 00:05:35.800204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.717 [2024-07-16 00:05:35.810465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.717 [2024-07-16 00:05:35.810483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.717 [2024-07-16 00:05:35.810489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.717 [2024-07-16 00:05:35.819390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.717 [2024-07-16 00:05:35.819409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.717 [2024-07-16 00:05:35.819415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.717 [2024-07-16 00:05:35.829844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.717 [2024-07-16 00:05:35.829863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.717 [2024-07-16 00:05:35.829869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.717 [2024-07-16 00:05:35.841623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.717 [2024-07-16 00:05:35.841641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.717 [2024-07-16 00:05:35.841651] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.717 [2024-07-16 00:05:35.850935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.717 [2024-07-16 00:05:35.850955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.717 [2024-07-16 00:05:35.850961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.717 [2024-07-16 00:05:35.861280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.717 [2024-07-16 00:05:35.861298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.717 [2024-07-16 00:05:35.861305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.718 [2024-07-16 00:05:35.872817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.718 [2024-07-16 00:05:35.872836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.718 [2024-07-16 00:05:35.872842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.718 [2024-07-16 00:05:35.882584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.718 [2024-07-16 00:05:35.882603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.718 [2024-07-16 00:05:35.882610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.718 [2024-07-16 00:05:35.892066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.718 [2024-07-16 00:05:35.892085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.718 [2024-07-16 00:05:35.892091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.718 [2024-07-16 00:05:35.904150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.718 [2024-07-16 00:05:35.904170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.718 [2024-07-16 00:05:35.904176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.979 [2024-07-16 00:05:35.914524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.979 [2024-07-16 00:05:35.914543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:20.979 [2024-07-16 00:05:35.914550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.979 [2024-07-16 00:05:35.926437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.979 [2024-07-16 00:05:35.926456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-16 00:05:35.926463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.979 [2024-07-16 00:05:35.936092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.979 [2024-07-16 00:05:35.936115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-16 00:05:35.936122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.979 [2024-07-16 00:05:35.947298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.979 [2024-07-16 00:05:35.947317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-16 00:05:35.947324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.979 [2024-07-16 00:05:35.957936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.979 [2024-07-16 00:05:35.957955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-16 00:05:35.957961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.979 [2024-07-16 00:05:35.967839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.979 [2024-07-16 00:05:35.967859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-16 00:05:35.967865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.979 [2024-07-16 00:05:35.976279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.979 [2024-07-16 00:05:35.976298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-16 00:05:35.976305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.979 [2024-07-16 00:05:35.986513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.979 [2024-07-16 00:05:35.986532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-16 00:05:35.986539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.979 [2024-07-16 00:05:35.996625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.979 [2024-07-16 00:05:35.996644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-16 00:05:35.996650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.979 [2024-07-16 00:05:36.005706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.979 [2024-07-16 00:05:36.005724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.979 [2024-07-16 00:05:36.005731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.980 [2024-07-16 00:05:36.016451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.980 [2024-07-16 00:05:36.016471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-16 00:05:36.016477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.980 [2024-07-16 00:05:36.026569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.980 [2024-07-16 00:05:36.026589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-16 00:05:36.026595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.980 [2024-07-16 00:05:36.036611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.980 [2024-07-16 00:05:36.036630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-16 00:05:36.036636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.980 [2024-07-16 00:05:36.046774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.980 [2024-07-16 00:05:36.046793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-16 00:05:36.046800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.980 [2024-07-16 00:05:36.057033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.980 [2024-07-16 00:05:36.057052] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-16 00:05:36.057058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.980 [2024-07-16 00:05:36.068406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.980 [2024-07-16 00:05:36.068425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-16 00:05:36.068431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.980 [2024-07-16 00:05:36.078979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.980 [2024-07-16 00:05:36.078998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-16 00:05:36.079005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.980 [2024-07-16 00:05:36.089415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.980 [2024-07-16 00:05:36.089434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-16 00:05:36.089441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.980 [2024-07-16 00:05:36.098912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.980 [2024-07-16 00:05:36.098931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-16 00:05:36.098938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.980 [2024-07-16 00:05:36.108849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.980 [2024-07-16 00:05:36.108868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-16 00:05:36.108877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.980 [2024-07-16 00:05:36.119676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.980 [2024-07-16 00:05:36.119695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-16 00:05:36.119702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.980 [2024-07-16 00:05:36.130047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.980 [2024-07-16 00:05:36.130067] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-16 00:05:36.130073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.980 [2024-07-16 00:05:36.141848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.980 [2024-07-16 00:05:36.141867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-16 00:05:36.141873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.980 [2024-07-16 00:05:36.152882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.980 [2024-07-16 00:05:36.152900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-16 00:05:36.152907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.980 [2024-07-16 00:05:36.164062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:20.980 [2024-07-16 00:05:36.164080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.980 [2024-07-16 00:05:36.164087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.241 [2024-07-16 00:05:36.175069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.241 [2024-07-16 00:05:36.175088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.241 [2024-07-16 00:05:36.175094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.241 [2024-07-16 00:05:36.184359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.241 [2024-07-16 00:05:36.184378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.241 [2024-07-16 00:05:36.184384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.241 [2024-07-16 00:05:36.193751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.241 [2024-07-16 00:05:36.193771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.241 [2024-07-16 00:05:36.193777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.241 [2024-07-16 00:05:36.205316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 
00:29:21.241 [2024-07-16 00:05:36.205339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.241 [2024-07-16 00:05:36.205345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.241 [2024-07-16 00:05:36.215327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.241 [2024-07-16 00:05:36.215346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.241 [2024-07-16 00:05:36.215352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.241 [2024-07-16 00:05:36.225945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.241 [2024-07-16 00:05:36.225965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.241 [2024-07-16 00:05:36.225972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.241 [2024-07-16 00:05:36.235523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.241 [2024-07-16 00:05:36.235542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.241 [2024-07-16 00:05:36.235549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.241 [2024-07-16 00:05:36.245779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.241 [2024-07-16 00:05:36.245798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.241 [2024-07-16 00:05:36.245805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.241 [2024-07-16 00:05:36.257260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.241 [2024-07-16 00:05:36.257280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.241 [2024-07-16 00:05:36.257286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.241 [2024-07-16 00:05:36.266131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.241 [2024-07-16 00:05:36.266151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.241 [2024-07-16 00:05:36.266157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.241 [2024-07-16 00:05:36.274479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.241 [2024-07-16 00:05:36.274498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.241 [2024-07-16 00:05:36.274504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.241 [2024-07-16 00:05:36.284726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.241 [2024-07-16 00:05:36.284745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.241 [2024-07-16 00:05:36.284752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.241 [2024-07-16 00:05:36.295388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.241 [2024-07-16 00:05:36.295407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.241 [2024-07-16 00:05:36.295413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.241 [2024-07-16 00:05:36.305722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.242 [2024-07-16 00:05:36.305741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-16 00:05:36.305747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.242 [2024-07-16 00:05:36.315141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.242 [2024-07-16 00:05:36.315160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-16 00:05:36.315167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.242 [2024-07-16 00:05:36.325830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.242 [2024-07-16 00:05:36.325849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-16 00:05:36.325855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.242 [2024-07-16 00:05:36.336457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.242 [2024-07-16 00:05:36.336476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-16 00:05:36.336482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.242 [2024-07-16 00:05:36.348306] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.242 [2024-07-16 00:05:36.348324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-16 00:05:36.348331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.242 [2024-07-16 00:05:36.359794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.242 [2024-07-16 00:05:36.359813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-16 00:05:36.359820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.242 [2024-07-16 00:05:36.368718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.242 [2024-07-16 00:05:36.368737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-16 00:05:36.368743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.242 [2024-07-16 00:05:36.378255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.242 [2024-07-16 00:05:36.378276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-16 00:05:36.378282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.242 [2024-07-16 00:05:36.389030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.242 [2024-07-16 00:05:36.389048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-16 00:05:36.389055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.242 [2024-07-16 00:05:36.399640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.242 [2024-07-16 00:05:36.399658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-16 00:05:36.399665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.242 [2024-07-16 00:05:36.410038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.242 [2024-07-16 00:05:36.410056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-16 00:05:36.410062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:29:21.242 [2024-07-16 00:05:36.421422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.242 [2024-07-16 00:05:36.421440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-16 00:05:36.421447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.242 [2024-07-16 00:05:36.429925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.242 [2024-07-16 00:05:36.429944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.242 [2024-07-16 00:05:36.429951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.503 [2024-07-16 00:05:36.438794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.503 [2024-07-16 00:05:36.438813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.503 [2024-07-16 00:05:36.438820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.503 [2024-07-16 00:05:36.449688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.503 [2024-07-16 00:05:36.449707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.503 [2024-07-16 00:05:36.449713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.503 [2024-07-16 00:05:36.459594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.503 [2024-07-16 00:05:36.459613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.503 [2024-07-16 00:05:36.459619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.503 [2024-07-16 00:05:36.470524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.503 [2024-07-16 00:05:36.470542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.503 [2024-07-16 00:05:36.470548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.503 [2024-07-16 00:05:36.481221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.503 [2024-07-16 00:05:36.481245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.503 [2024-07-16 00:05:36.481251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.503 [2024-07-16 00:05:36.491786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.503 [2024-07-16 00:05:36.491804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.503 [2024-07-16 00:05:36.491811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.503 [2024-07-16 00:05:36.501150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.503 [2024-07-16 00:05:36.501169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.503 [2024-07-16 00:05:36.501175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.503 [2024-07-16 00:05:36.511301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.503 [2024-07-16 00:05:36.511320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.503 [2024-07-16 00:05:36.511327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.503 [2024-07-16 00:05:36.524396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.503 [2024-07-16 00:05:36.524414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.503 [2024-07-16 00:05:36.524420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.503 [2024-07-16 00:05:36.537103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.503 [2024-07-16 00:05:36.537122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.504 [2024-07-16 00:05:36.537128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.504 [2024-07-16 00:05:36.547572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.504 [2024-07-16 00:05:36.547591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.504 [2024-07-16 00:05:36.547597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.504 [2024-07-16 00:05:36.556656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.504 [2024-07-16 00:05:36.556674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.504 [2024-07-16 00:05:36.556684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.504 [2024-07-16 00:05:36.565440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.504 [2024-07-16 00:05:36.565458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.504 [2024-07-16 00:05:36.565464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.504 [2024-07-16 00:05:36.574935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.504 [2024-07-16 00:05:36.574953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.504 [2024-07-16 00:05:36.574960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.504 [2024-07-16 00:05:36.585234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.504 [2024-07-16 00:05:36.585252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.504 [2024-07-16 00:05:36.585259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.504 [2024-07-16 00:05:36.596129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.504 [2024-07-16 00:05:36.596147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.504 [2024-07-16 00:05:36.596153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.504 [2024-07-16 00:05:36.605938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.504 [2024-07-16 00:05:36.605956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.504 [2024-07-16 00:05:36.605962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.504 [2024-07-16 00:05:36.614917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.504 [2024-07-16 00:05:36.614935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.504 [2024-07-16 00:05:36.614942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.504 [2024-07-16 00:05:36.626936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.504 [2024-07-16 00:05:36.626955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.504 [2024-07-16 00:05:36.626961] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.504 [2024-07-16 00:05:36.637130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.504 [2024-07-16 00:05:36.637149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.504 [2024-07-16 00:05:36.637155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.504 [2024-07-16 00:05:36.648301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.504 [2024-07-16 00:05:36.648322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.504 [2024-07-16 00:05:36.648329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.504 [2024-07-16 00:05:36.658731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.504 [2024-07-16 00:05:36.658750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.504 [2024-07-16 00:05:36.658757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.504 [2024-07-16 00:05:36.669694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.504 [2024-07-16 00:05:36.669713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.504 [2024-07-16 00:05:36.669719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.504 [2024-07-16 00:05:36.679891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.504 [2024-07-16 00:05:36.679909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.504 [2024-07-16 00:05:36.679915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.504 [2024-07-16 00:05:36.690629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.504 [2024-07-16 00:05:36.690648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.504 [2024-07-16 00:05:36.690654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.765 [2024-07-16 00:05:36.699450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.765 [2024-07-16 00:05:36.699469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:21.765 [2024-07-16 00:05:36.699476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.765 [2024-07-16 00:05:36.709653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.765 [2024-07-16 00:05:36.709672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.765 [2024-07-16 00:05:36.709679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.765 [2024-07-16 00:05:36.719306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.765 [2024-07-16 00:05:36.719325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.765 [2024-07-16 00:05:36.719332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.765 [2024-07-16 00:05:36.728137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.765 [2024-07-16 00:05:36.728155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.765 [2024-07-16 00:05:36.728162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.765 [2024-07-16 00:05:36.738389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.765 [2024-07-16 00:05:36.738408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.765 [2024-07-16 00:05:36.738414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.765 [2024-07-16 00:05:36.748711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.765 [2024-07-16 00:05:36.748730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.765 [2024-07-16 00:05:36.748737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.765 [2024-07-16 00:05:36.760076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.765 [2024-07-16 00:05:36.760094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.765 [2024-07-16 00:05:36.760101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.765 [2024-07-16 00:05:36.768909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.765 [2024-07-16 00:05:36.768927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22336 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.765 [2024-07-16 00:05:36.768933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.765 [2024-07-16 00:05:36.776391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.765 [2024-07-16 00:05:36.776409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.765 [2024-07-16 00:05:36.776415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.765 [2024-07-16 00:05:36.786093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.765 [2024-07-16 00:05:36.786111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.766 [2024-07-16 00:05:36.786118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.766 [2024-07-16 00:05:36.795271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.766 [2024-07-16 00:05:36.795290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.766 [2024-07-16 00:05:36.795296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.766 [2024-07-16 00:05:36.805627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.766 [2024-07-16 00:05:36.805646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.766 [2024-07-16 00:05:36.805652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.766 [2024-07-16 00:05:36.817853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.766 [2024-07-16 00:05:36.817871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.766 [2024-07-16 00:05:36.817882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.766 [2024-07-16 00:05:36.830128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.766 [2024-07-16 00:05:36.830147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.766 [2024-07-16 00:05:36.830153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.766 [2024-07-16 00:05:36.843964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.766 [2024-07-16 00:05:36.843983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.766 [2024-07-16 00:05:36.843990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.766 [2024-07-16 00:05:36.857244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.766 [2024-07-16 00:05:36.857263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.766 [2024-07-16 00:05:36.857269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.766 [2024-07-16 00:05:36.868439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.766 [2024-07-16 00:05:36.868457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.766 [2024-07-16 00:05:36.868464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.766 [2024-07-16 00:05:36.879456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.766 [2024-07-16 00:05:36.879475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.766 [2024-07-16 00:05:36.879482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.766 [2024-07-16 00:05:36.891002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.766 [2024-07-16 00:05:36.891021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.766 [2024-07-16 00:05:36.891027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.766 [2024-07-16 00:05:36.901820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.766 [2024-07-16 00:05:36.901839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.766 [2024-07-16 00:05:36.901845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.766 [2024-07-16 00:05:36.912765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.766 [2024-07-16 00:05:36.912784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.766 [2024-07-16 00:05:36.912790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.766 [2024-07-16 00:05:36.923612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 
00:29:21.766 [2024-07-16 00:05:36.923637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.766 [2024-07-16 00:05:36.923644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.766 [2024-07-16 00:05:36.933627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.766 [2024-07-16 00:05:36.933645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.766 [2024-07-16 00:05:36.933652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.766 [2024-07-16 00:05:36.944643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:21.766 [2024-07-16 00:05:36.944661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.766 [2024-07-16 00:05:36.944667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.028 [2024-07-16 00:05:36.955931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.028 [2024-07-16 00:05:36.955951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.028 [2024-07-16 00:05:36.955958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.028 [2024-07-16 00:05:36.966859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.028 [2024-07-16 00:05:36.966878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.028 [2024-07-16 00:05:36.966885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.028 [2024-07-16 00:05:36.977163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.028 [2024-07-16 00:05:36.977182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.028 [2024-07-16 00:05:36.977188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.028 [2024-07-16 00:05:36.987500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.028 [2024-07-16 00:05:36.987519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.028 [2024-07-16 00:05:36.987526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.028 [2024-07-16 00:05:37.000134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.028 [2024-07-16 00:05:37.000153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.028 [2024-07-16 00:05:37.000159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.028 [2024-07-16 00:05:37.012734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.028 [2024-07-16 00:05:37.012752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.028 [2024-07-16 00:05:37.012762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.028 [2024-07-16 00:05:37.023870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.028 [2024-07-16 00:05:37.023889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.028 [2024-07-16 00:05:37.023895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.028 [2024-07-16 00:05:37.035711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.028 [2024-07-16 00:05:37.035729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.028 [2024-07-16 00:05:37.035735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.028 [2024-07-16 00:05:37.046365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.028 [2024-07-16 00:05:37.046383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.028 [2024-07-16 00:05:37.046390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.028 [2024-07-16 00:05:37.055659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.028 [2024-07-16 00:05:37.055678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.028 [2024-07-16 00:05:37.055684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.028 [2024-07-16 00:05:37.066433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.028 [2024-07-16 00:05:37.066452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.028 [2024-07-16 00:05:37.066458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.028 [2024-07-16 00:05:37.077502] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.028 [2024-07-16 00:05:37.077521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.028 [2024-07-16 00:05:37.077527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.028 [2024-07-16 00:05:37.089035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.028 [2024-07-16 00:05:37.089054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.028 [2024-07-16 00:05:37.089060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.028 [2024-07-16 00:05:37.100578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.028 [2024-07-16 00:05:37.100596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.028 [2024-07-16 00:05:37.100602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.028 [2024-07-16 00:05:37.111643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.028 [2024-07-16 00:05:37.111665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.029 [2024-07-16 00:05:37.111671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.029 [2024-07-16 00:05:37.121913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.029 [2024-07-16 00:05:37.121932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.029 [2024-07-16 00:05:37.121938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.029 [2024-07-16 00:05:37.132751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.029 [2024-07-16 00:05:37.132770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.029 [2024-07-16 00:05:37.132776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.029 [2024-07-16 00:05:37.143762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.029 [2024-07-16 00:05:37.143780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.029 [2024-07-16 00:05:37.143786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:29:22.029 [2024-07-16 00:05:37.153568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.029 [2024-07-16 00:05:37.153586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.029 [2024-07-16 00:05:37.153592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.029 [2024-07-16 00:05:37.164235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.029 [2024-07-16 00:05:37.164253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.029 [2024-07-16 00:05:37.164259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.029 [2024-07-16 00:05:37.175060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.029 [2024-07-16 00:05:37.175079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.029 [2024-07-16 00:05:37.175085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.029 [2024-07-16 00:05:37.185888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.029 [2024-07-16 00:05:37.185906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.029 [2024-07-16 00:05:37.185912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.029 [2024-07-16 00:05:37.195768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.029 [2024-07-16 00:05:37.195786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.029 [2024-07-16 00:05:37.195793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.029 [2024-07-16 00:05:37.206950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.029 [2024-07-16 00:05:37.206968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.029 [2024-07-16 00:05:37.206974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.291 [2024-07-16 00:05:37.218458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.291 [2024-07-16 00:05:37.218477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.291 [2024-07-16 00:05:37.218483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.291 [2024-07-16 00:05:37.231003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.291 [2024-07-16 00:05:37.231021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.291 [2024-07-16 00:05:37.231028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.291 [2024-07-16 00:05:37.244383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.291 [2024-07-16 00:05:37.244402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.291 [2024-07-16 00:05:37.244408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.291 [2024-07-16 00:05:37.257632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.291 [2024-07-16 00:05:37.257651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.291 [2024-07-16 00:05:37.257657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.291 [2024-07-16 00:05:37.271310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.291 [2024-07-16 00:05:37.271329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.291 [2024-07-16 00:05:37.271335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.291 [2024-07-16 00:05:37.284884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.291 [2024-07-16 00:05:37.284903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.291 [2024-07-16 00:05:37.284909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.291 [2024-07-16 00:05:37.299181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.291 [2024-07-16 00:05:37.299200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.291 [2024-07-16 00:05:37.299206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.291 [2024-07-16 00:05:37.313136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.291 [2024-07-16 00:05:37.313155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.291 [2024-07-16 00:05:37.313164] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.291 [2024-07-16 00:05:37.327481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.291 [2024-07-16 00:05:37.327500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.291 [2024-07-16 00:05:37.327506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.291 [2024-07-16 00:05:37.341105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.291 [2024-07-16 00:05:37.341123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.291 [2024-07-16 00:05:37.341129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.291 [2024-07-16 00:05:37.352667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.291 [2024-07-16 00:05:37.352686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.291 [2024-07-16 00:05:37.352693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.291 [2024-07-16 00:05:37.363925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.291 [2024-07-16 00:05:37.363943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.291 [2024-07-16 00:05:37.363949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.291 [2024-07-16 00:05:37.376639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.291 [2024-07-16 00:05:37.376658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.291 [2024-07-16 00:05:37.376665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.291 [2024-07-16 00:05:37.388781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.291 [2024-07-16 00:05:37.388800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.291 [2024-07-16 00:05:37.388807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.291 [2024-07-16 00:05:37.400878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.291 [2024-07-16 00:05:37.400897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:22.291 [2024-07-16 00:05:37.400904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.291 [2024-07-16 00:05:37.412524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.291 [2024-07-16 00:05:37.412543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.291 [2024-07-16 00:05:37.412549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.291 [2024-07-16 00:05:37.424028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.291 [2024-07-16 00:05:37.424050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.292 [2024-07-16 00:05:37.424056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.292 [2024-07-16 00:05:37.434802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.292 [2024-07-16 00:05:37.434821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.292 [2024-07-16 00:05:37.434828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.292 [2024-07-16 00:05:37.445559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.292 [2024-07-16 00:05:37.445578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.292 [2024-07-16 00:05:37.445584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.292 [2024-07-16 00:05:37.456058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.292 [2024-07-16 00:05:37.456077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.292 [2024-07-16 00:05:37.456084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.292 [2024-07-16 00:05:37.466580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.292 [2024-07-16 00:05:37.466600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.292 [2024-07-16 00:05:37.466606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.292 [2024-07-16 00:05:37.478470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.292 [2024-07-16 00:05:37.478489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.292 [2024-07-16 00:05:37.478495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.552 [2024-07-16 00:05:37.489365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.552 [2024-07-16 00:05:37.489384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.552 [2024-07-16 00:05:37.489391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.552 [2024-07-16 00:05:37.500701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.552 [2024-07-16 00:05:37.500720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.552 [2024-07-16 00:05:37.500726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.552 [2024-07-16 00:05:37.510564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.552 [2024-07-16 00:05:37.510583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.552 [2024-07-16 00:05:37.510590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.552 [2024-07-16 00:05:37.520736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.552 [2024-07-16 00:05:37.520755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.552 [2024-07-16 00:05:37.520762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.552 [2024-07-16 00:05:37.531676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.552 [2024-07-16 00:05:37.531695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.552 [2024-07-16 00:05:37.531701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.552 [2024-07-16 00:05:37.543804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.552 [2024-07-16 00:05:37.543823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.552 [2024-07-16 00:05:37.543829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.553 [2024-07-16 00:05:37.555362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.553 [2024-07-16 00:05:37.555380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.553 [2024-07-16 00:05:37.555387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.553 [2024-07-16 00:05:37.564985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.553 [2024-07-16 00:05:37.565004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.553 [2024-07-16 00:05:37.565010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.553 [2024-07-16 00:05:37.575705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.553 [2024-07-16 00:05:37.575724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.553 [2024-07-16 00:05:37.575730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.553 [2024-07-16 00:05:37.585787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.553 [2024-07-16 00:05:37.585806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.553 [2024-07-16 00:05:37.585813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.553 [2024-07-16 00:05:37.595883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.553 [2024-07-16 00:05:37.595902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.553 [2024-07-16 00:05:37.595909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.553 [2024-07-16 00:05:37.605940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.553 [2024-07-16 00:05:37.605958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.553 [2024-07-16 00:05:37.605968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.553 [2024-07-16 00:05:37.616011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.553 [2024-07-16 00:05:37.616030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.553 [2024-07-16 00:05:37.616036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.553 [2024-07-16 00:05:37.626202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.553 
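Each READ failure in the stretch above is the same digest-error path: the host-side nvme_tcp receive hook reports a data digest mismatch on the qpair, and the command completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22), a count the harness reads back through bdev_get_iostat further down in the trace. A minimal sketch of that counter check, assuming the bdevperf RPC socket and bdev name from this run (/var/tmp/bperf.sock, nvme0n1); the rpc/sock/count shell variables are only shorthand for the paths shown in the trace:

# Sketch of the get_transient_errcount step traced below; not the harness itself.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# bdev_get_iostat exposes the per-bdev NVMe error counters the harness enables via
# "bdev_nvme_set_options --nvme-error-stat"; jq pulls out the transient transport error count.
count=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

# The test only passes when at least one transient transport error was recorded.
(( count > 0 )) || exit 1
echo "transient transport errors: $count"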
[2024-07-16 00:05:37.626222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.553 [2024-07-16 00:05:37.626233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.553 [2024-07-16 00:05:37.637121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.553 [2024-07-16 00:05:37.637140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.553 [2024-07-16 00:05:37.637146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.553 [2024-07-16 00:05:37.647775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.553 [2024-07-16 00:05:37.647794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.553 [2024-07-16 00:05:37.647800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.553 [2024-07-16 00:05:37.658222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.553 [2024-07-16 00:05:37.658245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.553 [2024-07-16 00:05:37.658252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.553 [2024-07-16 00:05:37.668929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.553 [2024-07-16 00:05:37.668948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.553 [2024-07-16 00:05:37.668954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.553 [2024-07-16 00:05:37.678827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.553 [2024-07-16 00:05:37.678846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.553 [2024-07-16 00:05:37.678853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.553 [2024-07-16 00:05:37.688928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x208c0f0) 00:29:22.553 [2024-07-16 00:05:37.688946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.553 [2024-07-16 00:05:37.688953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.553 00:29:22.553 Latency(us) 00:29:22.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:29:22.553 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:22.553 nvme0n1 : 2.00 2884.32 360.54 0.00 0.00 5542.56 815.79 14964.05 00:29:22.553 =================================================================================================================== 00:29:22.553 Total : 2884.32 360.54 0.00 0.00 5542.56 815.79 14964.05 00:29:22.553 0 00:29:22.553 00:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:22.553 00:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:22.553 00:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:22.553 | .driver_specific 00:29:22.553 | .nvme_error 00:29:22.553 | .status_code 00:29:22.553 | .command_transient_transport_error' 00:29:22.553 00:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:22.814 00:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 186 > 0 )) 00:29:22.814 00:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 648663 00:29:22.814 00:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@942 -- # '[' -z 648663 ']' 00:29:22.814 00:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # kill -0 648663 00:29:22.814 00:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # uname 00:29:22.814 00:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:29:22.814 00:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 648663 00:29:22.814 00:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:29:22.814 00:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:29:22.814 00:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # echo 'killing process with pid 648663' 00:29:22.814 killing process with pid 648663 00:29:22.814 00:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill 648663 00:29:22.814 Received shutdown signal, test time was about 2.000000 seconds 00:29:22.814 00:29:22.814 Latency(us) 00:29:22.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:22.814 =================================================================================================================== 00:29:22.814 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:22.814 00:05:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # wait 648663 00:29:23.076 00:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:23.076 00:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:23.076 00:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:23.076 00:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:23.076 00:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:23.076 00:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=649468 00:29:23.076 00:05:38 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 649468 /var/tmp/bperf.sock 00:29:23.076 00:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@823 -- # '[' -z 649468 ']' 00:29:23.076 00:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:23.076 00:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:23.076 00:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # local max_retries=100 00:29:23.076 00:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:23.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:23.076 00:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # xtrace_disable 00:29:23.076 00:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:23.076 [2024-07-16 00:05:38.092653] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:29:23.076 [2024-07-16 00:05:38.092711] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid649468 ] 00:29:23.076 [2024-07-16 00:05:38.172541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.076 [2024-07-16 00:05:38.225864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.018 00:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:29:24.018 00:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # return 0 00:29:24.018 00:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:24.018 00:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:24.018 00:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:24.018 00:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:24.018 00:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:24.018 00:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:24.018 00:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:24.018 00:05:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:24.279 nvme0n1 00:29:24.279 00:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd 
accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:24.279 00:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:24.279 00:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:24.279 00:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:24.279 00:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:24.279 00:05:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:24.279 Running I/O for 2 seconds... 00:29:24.279 [2024-07-16 00:05:39.461142] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190de8a8 00:29:24.279 [2024-07-16 00:05:39.461918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.279 [2024-07-16 00:05:39.461948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:24.540 [2024-07-16 00:05:39.473239] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190de038 00:29:24.540 [2024-07-16 00:05:39.473981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.540 [2024-07-16 00:05:39.474004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:24.541 [2024-07-16 00:05:39.490037] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190de038 00:29:24.541 [2024-07-16 00:05:39.492065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.541 [2024-07-16 00:05:39.492083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.541 [2024-07-16 00:05:39.502089] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190de8a8 00:29:24.541 [2024-07-16 00:05:39.504080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.541 [2024-07-16 00:05:39.504097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:24.541 [2024-07-16 00:05:39.514156] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190fda78 00:29:24.541 [2024-07-16 00:05:39.516129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.541 [2024-07-16 00:05:39.516147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:24.541 [2024-07-16 00:05:39.526329] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190fe2e8 00:29:24.541 [2024-07-16 00:05:39.528271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:23 nsid:1 lba:7596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.541 [2024-07-16 00:05:39.528288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:24.541 [2024-07-16 00:05:39.538338] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190fef90 00:29:24.541 [2024-07-16 00:05:39.540262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.541 [2024-07-16 00:05:39.540278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:24.541 [2024-07-16 00:05:39.550355] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190feb58 00:29:24.541 [2024-07-16 00:05:39.552258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.541 [2024-07-16 00:05:39.552274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:24.541 [2024-07-16 00:05:39.562381] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190fd208 00:29:24.541 [2024-07-16 00:05:39.564262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.541 [2024-07-16 00:05:39.564279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:24.541 [2024-07-16 00:05:39.574396] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190fc998 00:29:24.541 [2024-07-16 00:05:39.576255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.541 [2024-07-16 00:05:39.576271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:24.541 [2024-07-16 00:05:39.586410] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190fc128 00:29:24.541 [2024-07-16 00:05:39.588254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.541 [2024-07-16 00:05:39.588271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:24.541 [2024-07-16 00:05:39.598433] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190fb8b8 00:29:24.541 [2024-07-16 00:05:39.600253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.541 [2024-07-16 00:05:39.600270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:24.541 [2024-07-16 00:05:39.610424] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190fb048 00:29:24.541 [2024-07-16 00:05:39.612228] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.541 [2024-07-16 00:05:39.612248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:24.541 [2024-07-16 00:05:39.622428] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190fa7d8 00:29:24.541 [2024-07-16 00:05:39.624208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.541 [2024-07-16 00:05:39.624223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:24.541 [2024-07-16 00:05:39.634495] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f9f68 00:29:24.541 [2024-07-16 00:05:39.636255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.541 [2024-07-16 00:05:39.636271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:24.541 [2024-07-16 00:05:39.646521] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f96f8 00:29:24.541 [2024-07-16 00:05:39.648262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.541 [2024-07-16 00:05:39.648278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:24.541 [2024-07-16 00:05:39.658538] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f8e88 00:29:24.541 [2024-07-16 00:05:39.660256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.541 [2024-07-16 00:05:39.660272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:24.541 [2024-07-16 00:05:39.670538] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f8618 00:29:24.541 [2024-07-16 00:05:39.672240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.541 [2024-07-16 00:05:39.672257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.541 [2024-07-16 00:05:39.682556] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f7da8 00:29:24.541 [2024-07-16 00:05:39.684241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.541 [2024-07-16 00:05:39.684257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:24.541 [2024-07-16 00:05:39.694608] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f7538 00:29:24.541 [2024-07-16 00:05:39.696270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.541 [2024-07-16 00:05:39.696286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:24.541 [2024-07-16 00:05:39.706642] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f6cc8 00:29:24.541 [2024-07-16 00:05:39.708284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.541 [2024-07-16 00:05:39.708301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:24.541 [2024-07-16 00:05:39.718671] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f6458 00:29:24.542 [2024-07-16 00:05:39.720293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.542 [2024-07-16 00:05:39.720309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:24.803 [2024-07-16 00:05:39.730721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f5be8 00:29:24.803 [2024-07-16 00:05:39.732325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.803 [2024-07-16 00:05:39.732341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:24.803 [2024-07-16 00:05:39.742774] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f5378 00:29:24.803 [2024-07-16 00:05:39.744351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.803 [2024-07-16 00:05:39.744367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:24.803 [2024-07-16 00:05:39.754785] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f4b08 00:29:24.803 [2024-07-16 00:05:39.756342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.803 [2024-07-16 00:05:39.756357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:24.803 [2024-07-16 00:05:39.766815] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f4298 00:29:24.803 [2024-07-16 00:05:39.768355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.803 [2024-07-16 00:05:39.768373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:24.803 [2024-07-16 00:05:39.778830] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f3a28 00:29:24.803 [2024-07-16 
00:05:39.780349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.803 [2024-07-16 00:05:39.780366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:24.803 [2024-07-16 00:05:39.790857] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f31b8 00:29:24.803 [2024-07-16 00:05:39.792357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.803 [2024-07-16 00:05:39.792377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:24.803 [2024-07-16 00:05:39.802883] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f2948 00:29:24.803 [2024-07-16 00:05:39.804362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.803 [2024-07-16 00:05:39.804379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:24.803 [2024-07-16 00:05:39.814911] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f20d8 00:29:24.803 [2024-07-16 00:05:39.816367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.803 [2024-07-16 00:05:39.816383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:24.803 [2024-07-16 00:05:39.825441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f1868 00:29:24.803 [2024-07-16 00:05:39.826239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.803 [2024-07-16 00:05:39.826255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:24.803 [2024-07-16 00:05:39.836670] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f0ff8 00:29:24.803 [2024-07-16 00:05:39.837449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.803 [2024-07-16 00:05:39.837465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:24.803 [2024-07-16 00:05:39.848734] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f0788 00:29:24.803 [2024-07-16 00:05:39.849494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.803 [2024-07-16 00:05:39.849510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:24.803 [2024-07-16 00:05:39.860759] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190eff18 
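The WRITE-side digest failures running through this section follow from the sequence traced a little earlier: bdevperf is restarted for a randwrite workload (4096-byte I/O, queue depth 128), the controller is attached over TCP with --ddgst so data digests are generated and verified, and accel_error_inject_error corrupts the crc32c operation, which is why the data_crc32_calc_done path in tcp.c now reports the mismatch on the write side. A condensed sketch of that setup, assuming the sockets, paths, and target address from this run (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1); spdk/sock are only shorthand variables:

# Sketch of the randwrite error-injection setup traced above; paths and addresses as in this run.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/bperf.sock

# Start bdevperf with the same workload parameters; -z makes it wait for RPC-driven tests.
"$spdk"/build/examples/bdevperf -m 2 -r "$sock" -w randwrite -o 4096 -t 2 -q 128 -z &

# Enable NVMe error counters with unlimited bdev retries, then attach with data digest on.
"$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
"$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Inject crc32c corruption in the accel layer (arguments exactly as traced); the trace issues
# this through rpc_cmd rather than bperf_rpc, so it appears to go to the other application's
# default RPC socket rather than to bperf.sock.
"$spdk"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

# Kick off the timed run (the trace's "bperf_py perform_tests").
"$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests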
00:29:24.804 [2024-07-16 00:05:39.861497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.804 [2024-07-16 00:05:39.861512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:24.804 [2024-07-16 00:05:39.877459] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190ef6a8 00:29:24.804 [2024-07-16 00:05:39.879474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.804 [2024-07-16 00:05:39.879491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.804 [2024-07-16 00:05:39.887969] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190eff18 00:29:24.804 [2024-07-16 00:05:39.889344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.804 [2024-07-16 00:05:39.889361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:24.804 [2024-07-16 00:05:39.899190] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f0788 00:29:24.804 [2024-07-16 00:05:39.900543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.804 [2024-07-16 00:05:39.900559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:24.804 [2024-07-16 00:05:39.911237] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f0ff8 00:29:24.804 [2024-07-16 00:05:39.912558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.804 [2024-07-16 00:05:39.912574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:24.804 [2024-07-16 00:05:39.923262] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f1868 00:29:24.804 [2024-07-16 00:05:39.924566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.804 [2024-07-16 00:05:39.924581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:24.804 [2024-07-16 00:05:39.935248] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f20d8 00:29:24.804 [2024-07-16 00:05:39.936528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.804 [2024-07-16 00:05:39.936544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:24.804 [2024-07-16 00:05:39.949663] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) 
with pdu=0x2000190edd58 00:29:24.804 [2024-07-16 00:05:39.951599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.804 [2024-07-16 00:05:39.951614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:24.804 [2024-07-16 00:05:39.961683] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190ed4e8 00:29:24.804 [2024-07-16 00:05:39.963587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.804 [2024-07-16 00:05:39.963602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:24.804 [2024-07-16 00:05:39.973699] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190ecc78 00:29:24.804 [2024-07-16 00:05:39.975591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.804 [2024-07-16 00:05:39.975607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:24.804 [2024-07-16 00:05:39.985705] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190ec408 00:29:24.804 [2024-07-16 00:05:39.987566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.804 [2024-07-16 00:05:39.987582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:25.065 [2024-07-16 00:05:39.997702] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190ebb98 00:29:25.066 [2024-07-16 00:05:39.999547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.066 [2024-07-16 00:05:39.999563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:25.066 [2024-07-16 00:05:40.010244] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190eb328 00:29:25.066 [2024-07-16 00:05:40.012070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.066 [2024-07-16 00:05:40.012087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:25.066 [2024-07-16 00:05:40.022278] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190eaab8 00:29:25.066 [2024-07-16 00:05:40.024075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.066 [2024-07-16 00:05:40.024090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:25.066 [2024-07-16 00:05:40.034309] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x21c6920) with pdu=0x2000190ea248 00:29:25.066 [2024-07-16 00:05:40.036089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.066 [2024-07-16 00:05:40.036105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:25.066 [2024-07-16 00:05:40.046325] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e99d8 00:29:25.066 [2024-07-16 00:05:40.048138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.066 [2024-07-16 00:05:40.048155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:25.066 [2024-07-16 00:05:40.058447] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e9168 00:29:25.066 [2024-07-16 00:05:40.060186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.066 [2024-07-16 00:05:40.060202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:25.066 [2024-07-16 00:05:40.070495] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e88f8 00:29:25.066 [2024-07-16 00:05:40.072216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.066 [2024-07-16 00:05:40.072235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:25.066 [2024-07-16 00:05:40.082478] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e8088 00:29:25.066 [2024-07-16 00:05:40.084176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.066 [2024-07-16 00:05:40.084192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:25.066 [2024-07-16 00:05:40.094495] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e7818 00:29:25.066 [2024-07-16 00:05:40.096173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.066 [2024-07-16 00:05:40.096188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:25.066 [2024-07-16 00:05:40.106475] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e6fa8 00:29:25.066 [2024-07-16 00:05:40.108130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.066 [2024-07-16 00:05:40.108149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:25.066 [2024-07-16 00:05:40.118473] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e6738 00:29:25.066 [2024-07-16 00:05:40.120106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.066 [2024-07-16 00:05:40.120122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:25.066 [2024-07-16 00:05:40.130477] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e5ec8 00:29:25.066 [2024-07-16 00:05:40.132087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.066 [2024-07-16 00:05:40.132102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:25.066 [2024-07-16 00:05:40.142687] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e5658 00:29:25.066 [2024-07-16 00:05:40.144285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.066 [2024-07-16 00:05:40.144302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:25.066 [2024-07-16 00:05:40.154688] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e4de8 00:29:25.066 [2024-07-16 00:05:40.156264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.066 [2024-07-16 00:05:40.156280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:25.066 [2024-07-16 00:05:40.166714] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e4578 00:29:25.066 [2024-07-16 00:05:40.168269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.066 [2024-07-16 00:05:40.168285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:25.066 [2024-07-16 00:05:40.178766] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e3d08 00:29:25.066 [2024-07-16 00:05:40.180305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.066 [2024-07-16 00:05:40.180320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:25.066 [2024-07-16 00:05:40.190774] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e3498 00:29:25.066 [2024-07-16 00:05:40.192297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.066 [2024-07-16 00:05:40.192313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:25.066 
[2024-07-16 00:05:40.202792] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e2c28 00:29:25.066 [2024-07-16 00:05:40.204286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.066 [2024-07-16 00:05:40.204302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:25.066 [2024-07-16 00:05:40.214765] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e23b8 00:29:25.066 [2024-07-16 00:05:40.216247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.066 [2024-07-16 00:05:40.216263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:25.066 [2024-07-16 00:05:40.226787] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e1b48 00:29:25.066 [2024-07-16 00:05:40.228240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.066 [2024-07-16 00:05:40.228256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:25.066 [2024-07-16 00:05:40.238802] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e12d8 00:29:25.066 [2024-07-16 00:05:40.240237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.066 [2024-07-16 00:05:40.240253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:25.066 [2024-07-16 00:05:40.250780] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e0a68 00:29:25.066 [2024-07-16 00:05:40.252187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.066 [2024-07-16 00:05:40.252204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:25.328 [2024-07-16 00:05:40.262761] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e01f8 00:29:25.328 [2024-07-16 00:05:40.264164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.328 [2024-07-16 00:05:40.264180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:25.328 [2024-07-16 00:05:40.274757] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190df988 00:29:25.328 [2024-07-16 00:05:40.276130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.328 [2024-07-16 00:05:40.276146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:29:25.328 [2024-07-16 00:05:40.286757] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190df118 00:29:25.328 [2024-07-16 00:05:40.288107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.328 [2024-07-16 00:05:40.288123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:25.328 [2024-07-16 00:05:40.298748] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f0bc0 00:29:25.328 [2024-07-16 00:05:40.300074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.328 [2024-07-16 00:05:40.300089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:25.328 [2024-07-16 00:05:40.310746] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f1430 00:29:25.328 [2024-07-16 00:05:40.312056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.328 [2024-07-16 00:05:40.312072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:25.328 [2024-07-16 00:05:40.322752] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f1ca0 00:29:25.328 [2024-07-16 00:05:40.324041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.328 [2024-07-16 00:05:40.324056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:25.328 [2024-07-16 00:05:40.334762] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f2510 00:29:25.328 [2024-07-16 00:05:40.336032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.328 [2024-07-16 00:05:40.336048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:25.328 [2024-07-16 00:05:40.346771] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f2d80 00:29:25.328 [2024-07-16 00:05:40.348022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.328 [2024-07-16 00:05:40.348037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:25.328 [2024-07-16 00:05:40.358768] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f35f0 00:29:25.328 [2024-07-16 00:05:40.360001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.328 [2024-07-16 00:05:40.360017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 
cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:25.328 [2024-07-16 00:05:40.370754] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f3e60 00:29:25.328 [2024-07-16 00:05:40.371965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.328 [2024-07-16 00:05:40.371981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:25.328 [2024-07-16 00:05:40.382740] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f46d0 00:29:25.328 [2024-07-16 00:05:40.383930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.328 [2024-07-16 00:05:40.383945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:25.328 [2024-07-16 00:05:40.394719] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f4f40 00:29:25.328 [2024-07-16 00:05:40.395889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.328 [2024-07-16 00:05:40.395904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:25.328 [2024-07-16 00:05:40.406721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f57b0 00:29:25.328 [2024-07-16 00:05:40.407869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.328 [2024-07-16 00:05:40.407885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:25.328 [2024-07-16 00:05:40.418730] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f6020 00:29:25.328 [2024-07-16 00:05:40.419854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.328 [2024-07-16 00:05:40.419874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:25.328 [2024-07-16 00:05:40.430768] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f6890 00:29:25.329 [2024-07-16 00:05:40.431873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.329 [2024-07-16 00:05:40.431889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:25.329 [2024-07-16 00:05:40.442790] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f7100 00:29:25.329 [2024-07-16 00:05:40.443875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.329 [2024-07-16 00:05:40.443891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:25.329 [2024-07-16 00:05:40.454796] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f7970 00:29:25.329 [2024-07-16 00:05:40.455864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.329 [2024-07-16 00:05:40.455879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:25.329 [2024-07-16 00:05:40.466838] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f81e0 00:29:25.329 [2024-07-16 00:05:40.467885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.329 [2024-07-16 00:05:40.467900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:25.329 [2024-07-16 00:05:40.481349] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e7c50 00:29:25.329 [2024-07-16 00:05:40.483043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.329 [2024-07-16 00:05:40.483059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.329 [2024-07-16 00:05:40.493393] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e73e0 00:29:25.329 [2024-07-16 00:05:40.495061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.329 [2024-07-16 00:05:40.495076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:25.329 [2024-07-16 00:05:40.505430] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e6b70 00:29:25.329 [2024-07-16 00:05:40.507077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.329 [2024-07-16 00:05:40.507093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:25.329 [2024-07-16 00:05:40.515964] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e6300 00:29:25.329 [2024-07-16 00:05:40.516963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.329 [2024-07-16 00:05:40.516979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:25.590 [2024-07-16 00:05:40.527303] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e5a90 00:29:25.590 [2024-07-16 00:05:40.528279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.590 [2024-07-16 00:05:40.528295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:25.590 [2024-07-16 00:05:40.539362] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e5220 00:29:25.590 [2024-07-16 00:05:40.540318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.590 [2024-07-16 00:05:40.540334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:25.590 [2024-07-16 00:05:40.551403] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e49b0 00:29:25.590 [2024-07-16 00:05:40.552332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.590 [2024-07-16 00:05:40.552348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:25.590 [2024-07-16 00:05:40.563400] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e4140 00:29:25.590 [2024-07-16 00:05:40.564320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.590 [2024-07-16 00:05:40.564336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:25.590 [2024-07-16 00:05:40.575425] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e38d0 00:29:25.590 [2024-07-16 00:05:40.576330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.590 [2024-07-16 00:05:40.576345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:25.590 [2024-07-16 00:05:40.587464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e3060 00:29:25.590 [2024-07-16 00:05:40.588339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.590 [2024-07-16 00:05:40.588355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:25.590 [2024-07-16 00:05:40.599481] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e27f0 00:29:25.590 [2024-07-16 00:05:40.600335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.590 [2024-07-16 00:05:40.600352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:25.590 [2024-07-16 00:05:40.611480] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e1f80 00:29:25.590 [2024-07-16 00:05:40.612314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.591 [2024-07-16 00:05:40.612330] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:25.591 [2024-07-16 00:05:40.623499] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e1710 00:29:25.591 [2024-07-16 00:05:40.624314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.591 [2024-07-16 00:05:40.624330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:25.591 [2024-07-16 00:05:40.635533] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e0ea0 00:29:25.591 [2024-07-16 00:05:40.636323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.591 [2024-07-16 00:05:40.636339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:25.591 [2024-07-16 00:05:40.647564] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e0630 00:29:25.591 [2024-07-16 00:05:40.648336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.591 [2024-07-16 00:05:40.648351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:25.591 [2024-07-16 00:05:40.659581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190ddc00 00:29:25.591 [2024-07-16 00:05:40.660336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.591 [2024-07-16 00:05:40.660353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:25.591 [2024-07-16 00:05:40.671615] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190de470 00:29:25.591 [2024-07-16 00:05:40.672345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.591 [2024-07-16 00:05:40.672361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:25.591 [2024-07-16 00:05:40.688349] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190de470 00:29:25.591 [2024-07-16 00:05:40.690361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.591 [2024-07-16 00:05:40.690376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.591 [2024-07-16 00:05:40.700377] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190ddc00 00:29:25.591 [2024-07-16 00:05:40.702364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.591 [2024-07-16 00:05:40.702380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:25.591 [2024-07-16 00:05:40.712406] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e0630 00:29:25.591 [2024-07-16 00:05:40.714373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.591 [2024-07-16 00:05:40.714389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:25.591 [2024-07-16 00:05:40.722176] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190ee5c8 00:29:25.591 [2024-07-16 00:05:40.723491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.591 [2024-07-16 00:05:40.723507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:25.591 [2024-07-16 00:05:40.734237] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190edd58 00:29:25.591 [2024-07-16 00:05:40.735528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.591 [2024-07-16 00:05:40.735546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:25.591 [2024-07-16 00:05:40.746259] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190ed4e8 00:29:25.591 [2024-07-16 00:05:40.747529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.591 [2024-07-16 00:05:40.747544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:25.591 [2024-07-16 00:05:40.758297] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190ecc78 00:29:25.591 [2024-07-16 00:05:40.759546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.591 [2024-07-16 00:05:40.759563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:25.591 [2024-07-16 00:05:40.770331] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190ec408 00:29:25.591 [2024-07-16 00:05:40.771563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.591 [2024-07-16 00:05:40.771579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:25.853 [2024-07-16 00:05:40.782357] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190ebb98 00:29:25.853 [2024-07-16 00:05:40.783572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.853 [2024-07-16 
00:05:40.783588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:25.853 [2024-07-16 00:05:40.794411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190eb328 00:29:25.853 [2024-07-16 00:05:40.795596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.853 [2024-07-16 00:05:40.795611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:25.853 [2024-07-16 00:05:40.806391] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190eaab8 00:29:25.853 [2024-07-16 00:05:40.807558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.853 [2024-07-16 00:05:40.807574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:25.853 [2024-07-16 00:05:40.819311] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f4b08 00:29:25.853 [2024-07-16 00:05:40.820466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.853 [2024-07-16 00:05:40.820482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:25.853 [2024-07-16 00:05:40.830513] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f5378 00:29:25.853 [2024-07-16 00:05:40.831654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.853 [2024-07-16 00:05:40.831670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:25.853 [2024-07-16 00:05:40.842535] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f5be8 00:29:25.853 [2024-07-16 00:05:40.843655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.853 [2024-07-16 00:05:40.843671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:25.853 [2024-07-16 00:05:40.854561] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f6458 00:29:25.853 [2024-07-16 00:05:40.855657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.853 [2024-07-16 00:05:40.855672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:25.853 [2024-07-16 00:05:40.866578] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f6cc8 00:29:25.853 [2024-07-16 00:05:40.867655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:25.853 [2024-07-16 00:05:40.867671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:25.853 [2024-07-16 00:05:40.878574] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f7538 00:29:25.853 [2024-07-16 00:05:40.879631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.853 [2024-07-16 00:05:40.879647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:25.853 [2024-07-16 00:05:40.890567] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f7da8 00:29:25.853 [2024-07-16 00:05:40.891604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.853 [2024-07-16 00:05:40.891620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.853 [2024-07-16 00:05:40.902566] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f8618 00:29:25.853 [2024-07-16 00:05:40.903581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.853 [2024-07-16 00:05:40.903597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:25.853 [2024-07-16 00:05:40.914585] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f8e88 00:29:25.853 [2024-07-16 00:05:40.915583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.853 [2024-07-16 00:05:40.915598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:25.853 [2024-07-16 00:05:40.926605] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f96f8 00:29:25.853 [2024-07-16 00:05:40.927580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.853 [2024-07-16 00:05:40.927596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:25.853 [2024-07-16 00:05:40.941072] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e95a0 00:29:25.853 [2024-07-16 00:05:40.942696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.853 [2024-07-16 00:05:40.942712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:25.853 [2024-07-16 00:05:40.953082] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190e9e10 00:29:25.853 [2024-07-16 00:05:40.954688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10506 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:25.853 [2024-07-16 00:05:40.954705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:25.853 [2024-07-16 00:05:40.965095] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190ea680 00:29:25.853 [2024-07-16 00:05:40.966678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.853 [2024-07-16 00:05:40.966694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:25.853 [2024-07-16 00:05:40.977118] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190eaef0 00:29:25.853 [2024-07-16 00:05:40.978678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.853 [2024-07-16 00:05:40.978694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:25.853 [2024-07-16 00:05:40.989125] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190eb760 00:29:25.853 [2024-07-16 00:05:40.990667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.853 [2024-07-16 00:05:40.990683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:25.853 [2024-07-16 00:05:41.001170] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190ebfd0 00:29:25.853 [2024-07-16 00:05:41.002692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.853 [2024-07-16 00:05:41.002708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:25.853 [2024-07-16 00:05:41.013339] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190ec840 00:29:25.853 [2024-07-16 00:05:41.014831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.853 [2024-07-16 00:05:41.014847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:25.853 [2024-07-16 00:05:41.025374] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190ed0b0 00:29:25.853 [2024-07-16 00:05:41.026844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.853 [2024-07-16 00:05:41.026861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:25.853 [2024-07-16 00:05:41.037390] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190ed920 00:29:25.853 [2024-07-16 00:05:41.038846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 
lba:3203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:25.854 [2024-07-16 00:05:41.038862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:26.115 [2024-07-16 00:05:41.049419] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190ee190 00:29:26.115 [2024-07-16 00:05:41.050851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.115 [2024-07-16 00:05:41.050869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:26.115 [2024-07-16 00:05:41.061453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190eea00 00:29:26.115 [2024-07-16 00:05:41.062870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.115 [2024-07-16 00:05:41.062885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:26.115 [2024-07-16 00:05:41.073641] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190ef270 00:29:26.115 [2024-07-16 00:05:41.075030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.115 [2024-07-16 00:05:41.075046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:26.115 [2024-07-16 00:05:41.085644] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f0350 00:29:26.115 [2024-07-16 00:05:41.087017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.115 [2024-07-16 00:05:41.087033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.115 [2024-07-16 00:05:41.097655] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190fda78 00:29:26.115 [2024-07-16 00:05:41.099004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.115 [2024-07-16 00:05:41.099020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:26.115 [2024-07-16 00:05:41.109649] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190fe2e8 00:29:26.115 [2024-07-16 00:05:41.110981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.115 [2024-07-16 00:05:41.110997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:26.115 [2024-07-16 00:05:41.121669] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190fef90 00:29:26.115 [2024-07-16 00:05:41.122981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:54 nsid:1 lba:12409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.115 [2024-07-16 00:05:41.122997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:26.115 [2024-07-16 00:05:41.133692] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190feb58 00:29:26.115 [2024-07-16 00:05:41.134983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.115 [2024-07-16 00:05:41.134999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:26.115 [2024-07-16 00:05:41.145891] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190fd208 00:29:26.115 [2024-07-16 00:05:41.147163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.115 [2024-07-16 00:05:41.147179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:26.115 [2024-07-16 00:05:41.157936] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190fc998 00:29:26.115 [2024-07-16 00:05:41.159191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.115 [2024-07-16 00:05:41.159207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:26.115 [2024-07-16 00:05:41.169993] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190fc128 00:29:26.115 [2024-07-16 00:05:41.171223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.115 [2024-07-16 00:05:41.171242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:26.115 [2024-07-16 00:05:41.181980] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190fb8b8 00:29:26.115 [2024-07-16 00:05:41.183190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.115 [2024-07-16 00:05:41.183206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:26.115 [2024-07-16 00:05:41.193964] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190fb048 00:29:26.115 [2024-07-16 00:05:41.195149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.115 [2024-07-16 00:05:41.195166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:26.115 [2024-07-16 00:05:41.205986] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190fa7d8 00:29:26.115 [2024-07-16 00:05:41.207153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.115 [2024-07-16 00:05:41.207170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:26.115 [2024-07-16 00:05:41.217973] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f9f68 00:29:26.115 [2024-07-16 00:05:41.219120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.115 [2024-07-16 00:05:41.219136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:26.115 [2024-07-16 00:05:41.229996] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f96f8 00:29:26.115 [2024-07-16 00:05:41.231125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.115 [2024-07-16 00:05:41.231141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:26.115 [2024-07-16 00:05:41.242008] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f8e88 00:29:26.115 [2024-07-16 00:05:41.243111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.115 [2024-07-16 00:05:41.243127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:26.115 [2024-07-16 00:05:41.254015] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f8618 00:29:26.115 [2024-07-16 00:05:41.255101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.115 [2024-07-16 00:05:41.255116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:26.115 [2024-07-16 00:05:41.266043] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f7da8 00:29:26.115 [2024-07-16 00:05:41.267109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.115 [2024-07-16 00:05:41.267125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:26.115 [2024-07-16 00:05:41.278051] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f7538 00:29:26.115 [2024-07-16 00:05:41.279099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.116 [2024-07-16 00:05:41.279115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.116 [2024-07-16 00:05:41.290034] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f6cc8 00:29:26.116 [2024-07-16 
00:05:41.291059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.116 [2024-07-16 00:05:41.291075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:26.116 [2024-07-16 00:05:41.302052] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f6458 00:29:26.116 [2024-07-16 00:05:41.303058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.116 [2024-07-16 00:05:41.303074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:26.377 [2024-07-16 00:05:41.314052] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f5be8 00:29:26.377 [2024-07-16 00:05:41.315038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.377 [2024-07-16 00:05:41.315055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:26.377 [2024-07-16 00:05:41.326081] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f5378 00:29:26.377 [2024-07-16 00:05:41.327046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.377 [2024-07-16 00:05:41.327062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:26.377 [2024-07-16 00:05:41.338064] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f4b08 00:29:26.377 [2024-07-16 00:05:41.339009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.377 [2024-07-16 00:05:41.339025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:26.377 [2024-07-16 00:05:41.350063] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f4298 00:29:26.377 [2024-07-16 00:05:41.350988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.377 [2024-07-16 00:05:41.351004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:26.377 [2024-07-16 00:05:41.362069] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f3a28 00:29:26.377 [2024-07-16 00:05:41.362976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.377 [2024-07-16 00:05:41.362996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:26.377 [2024-07-16 00:05:41.374109] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with 
pdu=0x2000190f31b8 00:29:26.377 [2024-07-16 00:05:41.374992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.377 [2024-07-16 00:05:41.375009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:26.377 [2024-07-16 00:05:41.386162] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f2948 00:29:26.377 [2024-07-16 00:05:41.387028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.377 [2024-07-16 00:05:41.387044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:26.377 [2024-07-16 00:05:41.398165] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f20d8 00:29:26.377 [2024-07-16 00:05:41.399007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.377 [2024-07-16 00:05:41.399023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:26.377 [2024-07-16 00:05:41.410162] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f1868 00:29:26.377 [2024-07-16 00:05:41.410982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.377 [2024-07-16 00:05:41.410999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:26.377 [2024-07-16 00:05:41.422189] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f0ff8 00:29:26.377 [2024-07-16 00:05:41.422988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.377 [2024-07-16 00:05:41.423005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:26.377 [2024-07-16 00:05:41.434204] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190f0788 00:29:26.377 [2024-07-16 00:05:41.434984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.377 [2024-07-16 00:05:41.435000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:26.377 [2024-07-16 00:05:41.446217] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6920) with pdu=0x2000190df550 00:29:26.377 [2024-07-16 00:05:41.446977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:26.377 [2024-07-16 00:05:41.446993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:26.377 00:29:26.377 Latency(us) 00:29:26.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:29:26.377 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:26.377 nvme0n1 : 2.01 21041.81 82.19 0.00 0.00 6077.38 3850.24 16602.45 00:29:26.377 =================================================================================================================== 00:29:26.377 Total : 21041.81 82.19 0.00 0.00 6077.38 3850.24 16602.45 00:29:26.377 0 00:29:26.377 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:26.377 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:26.377 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:26.377 | .driver_specific 00:29:26.377 | .nvme_error 00:29:26.377 | .status_code 00:29:26.377 | .command_transient_transport_error' 00:29:26.377 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 165 > 0 )) 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 649468 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@942 -- # '[' -z 649468 ']' 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # kill -0 649468 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # uname 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 649468 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # echo 'killing process with pid 649468' 00:29:26.638 killing process with pid 649468 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill 649468 00:29:26.638 Received shutdown signal, test time was about 2.000000 seconds 00:29:26.638 00:29:26.638 Latency(us) 00:29:26.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.638 =================================================================================================================== 00:29:26.638 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # wait 649468 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=650182 00:29:26.638 00:05:41 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 650182 /var/tmp/bperf.sock 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@823 -- # '[' -z 650182 ']' 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # local max_retries=100 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:26.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # xtrace_disable 00:29:26.638 00:05:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:26.898 [2024-07-16 00:05:41.854427] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:29:26.898 [2024-07-16 00:05:41.854486] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid650182 ] 00:29:26.898 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:26.898 Zero copy mechanism will not be used. 00:29:26.898 [2024-07-16 00:05:41.934803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.898 [2024-07-16 00:05:41.988278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.517 00:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:29:27.517 00:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # return 0 00:29:27.517 00:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:27.517 00:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:27.836 00:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:27.836 00:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:27.836 00:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:27.836 00:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:27.837 00:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:27.837 00:05:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
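For reference, the digest-error check this part of the trace drives can be replayed by hand; the commands below are the ones visible in this log (socket paths, target address, and NQN taken from this run), collected into one sketch. The RPC shell variable is shorthand added here, and the accel_error_inject_error step is issued through the test framework's rpc_cmd helper in the script (not through bperf.sock).

  # assumes bdevperf is already running as started above:
  #   build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # keep NVMe error statistics and set the bdev retry count, as digest.sh does
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # attach the target with data digest (--ddgst) enabled on the TCP transport
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # corrupt crc32c results via the accel error-injection RPC (arguments as used by the script)
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  # run the queued workload, then read back the transient transport error counter
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests
  $RPC -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The stage passes when that counter is non-zero, which is what the (( 165 > 0 )) check above asserts for the previous run.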
00:29:28.098 nvme0n1 00:29:28.098 00:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:28.098 00:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:28.098 00:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:28.098 00:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:28.098 00:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:28.098 00:05:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:28.098 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:28.098 Zero copy mechanism will not be used. 00:29:28.098 Running I/O for 2 seconds... 00:29:28.098 [2024-07-16 00:05:43.265170] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.098 [2024-07-16 00:05:43.265588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.098 [2024-07-16 00:05:43.265616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.098 [2024-07-16 00:05:43.276997] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.098 [2024-07-16 00:05:43.277376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.098 [2024-07-16 00:05:43.277396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.098 [2024-07-16 00:05:43.286925] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.098 [2024-07-16 00:05:43.287294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.098 [2024-07-16 00:05:43.287313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.358 [2024-07-16 00:05:43.295966] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.358 [2024-07-16 00:05:43.296266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.358 [2024-07-16 00:05:43.296283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.358 [2024-07-16 00:05:43.303574] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.358 [2024-07-16 00:05:43.303889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.358 [2024-07-16 00:05:43.303908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:29:28.358 [2024-07-16 00:05:43.310941] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.358 [2024-07-16 00:05:43.311292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.358 [2024-07-16 00:05:43.311309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.358 [2024-07-16 00:05:43.318443] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.358 [2024-07-16 00:05:43.318760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.358 [2024-07-16 00:05:43.318778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.358 [2024-07-16 00:05:43.327873] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.358 [2024-07-16 00:05:43.328224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.358 [2024-07-16 00:05:43.328246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.358 [2024-07-16 00:05:43.338398] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.358 [2024-07-16 00:05:43.338840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.358 [2024-07-16 00:05:43.338857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.358 [2024-07-16 00:05:43.349901] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.358 [2024-07-16 00:05:43.350191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.358 [2024-07-16 00:05:43.350209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.358 [2024-07-16 00:05:43.359843] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.358 [2024-07-16 00:05:43.360033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.358 [2024-07-16 00:05:43.360049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.358 [2024-07-16 00:05:43.368658] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.358 [2024-07-16 00:05:43.368890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.358 [2024-07-16 00:05:43.368906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.358 [2024-07-16 00:05:43.379098] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.358 [2024-07-16 00:05:43.379340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.358 [2024-07-16 00:05:43.379356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.358 [2024-07-16 00:05:43.390941] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.358 [2024-07-16 00:05:43.391308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.358 [2024-07-16 00:05:43.391325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.358 [2024-07-16 00:05:43.402744] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.358 [2024-07-16 00:05:43.402982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.358 [2024-07-16 00:05:43.402999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.358 [2024-07-16 00:05:43.415690] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.358 [2024-07-16 00:05:43.416040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.358 [2024-07-16 00:05:43.416057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.358 [2024-07-16 00:05:43.428020] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.358 [2024-07-16 00:05:43.428290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.358 [2024-07-16 00:05:43.428307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.358 [2024-07-16 00:05:43.440574] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.358 [2024-07-16 00:05:43.440930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.358 [2024-07-16 00:05:43.440949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.358 [2024-07-16 00:05:43.452307] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.359 [2024-07-16 00:05:43.452662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.359 [2024-07-16 00:05:43.452679] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.359 [2024-07-16 00:05:43.464250] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.359 [2024-07-16 00:05:43.464601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.359 [2024-07-16 00:05:43.464622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.359 [2024-07-16 00:05:43.475511] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.359 [2024-07-16 00:05:43.475742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.359 [2024-07-16 00:05:43.475759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.359 [2024-07-16 00:05:43.487980] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.359 [2024-07-16 00:05:43.488315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.359 [2024-07-16 00:05:43.488332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.359 [2024-07-16 00:05:43.499317] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.359 [2024-07-16 00:05:43.499754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.359 [2024-07-16 00:05:43.499771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.359 [2024-07-16 00:05:43.510279] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.359 [2024-07-16 00:05:43.510421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.359 [2024-07-16 00:05:43.510437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.359 [2024-07-16 00:05:43.521708] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.359 [2024-07-16 00:05:43.521992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.359 [2024-07-16 00:05:43.522017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.359 [2024-07-16 00:05:43.532923] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.359 [2024-07-16 00:05:43.533028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.359 [2024-07-16 00:05:43.533044] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.359 [2024-07-16 00:05:43.545190] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.359 [2024-07-16 00:05:43.545508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.359 [2024-07-16 00:05:43.545525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.556559] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.556855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.556872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.568501] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.568915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.568932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.580193] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.580543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.580560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.592461] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.592697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.592714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.604828] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.605222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.605245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.615778] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.616122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:28.619 [2024-07-16 00:05:43.616140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.626754] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.627060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.627076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.636706] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.637041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.637059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.646789] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.647018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.647035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.658014] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.658378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.658395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.667851] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.668081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.668098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.675914] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.676186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.676203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.682198] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.682431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.682447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.688412] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.688639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.688655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.697150] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.697368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.697385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.706349] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.706613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.706630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.713150] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.713369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.713385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.719443] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.719658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.719675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.725378] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.725718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.725738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.734223] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.734570] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.734587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.741276] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.741609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.741626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.746963] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.747344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.747361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.752651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.752959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.752976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.758376] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.758680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.758698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.765885] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.766185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.766202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.771821] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.772031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.772047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.777856] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.778076] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.778093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.783628] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.783842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.783858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.791493] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.791710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.791726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.798797] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.799010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.799026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.619 [2024-07-16 00:05:43.805121] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.619 [2024-07-16 00:05:43.805339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.619 [2024-07-16 00:05:43.805355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.880 [2024-07-16 00:05:43.812895] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.880 [2024-07-16 00:05:43.813209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.813226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.821964] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.822292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.822310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.831453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 
00:29:28.881 [2024-07-16 00:05:43.831671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.831687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.842917] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.843147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.843163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.851874] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.852218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.852240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.860131] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.860303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.860318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.867678] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.867992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.868010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.875118] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.875340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.875356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.880714] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.880938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.880954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.887166] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.887383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.887401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.892896] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.893108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.893124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.899483] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.899692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.899708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.905803] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.906110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.906127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.912870] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.913083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.913103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.921334] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.921645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.921663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.927730] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.928037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.928054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.936864] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.937180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.937197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.943255] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.943591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.943607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.952317] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.952646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.952662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.961413] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.961629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.961645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.966676] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.966894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.966910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.971890] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.972104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.972119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.976928] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.977138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.977154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:29:28.881 [2024-07-16 00:05:43.981415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.981627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.981644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.987300] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.987607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.987624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.992821] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.993030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.993046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:43.997682] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:43.997891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:43.997907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:44.002900] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:44.003119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:44.003135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:44.008499] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.881 [2024-07-16 00:05:44.008710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.881 [2024-07-16 00:05:44.008726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.881 [2024-07-16 00:05:44.014020] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.882 [2024-07-16 00:05:44.014245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.882 [2024-07-16 00:05:44.014261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.882 [2024-07-16 00:05:44.020358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.882 [2024-07-16 00:05:44.020577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.882 [2024-07-16 00:05:44.020596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.882 [2024-07-16 00:05:44.025201] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.882 [2024-07-16 00:05:44.025429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.882 [2024-07-16 00:05:44.025445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.882 [2024-07-16 00:05:44.032181] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.882 [2024-07-16 00:05:44.032502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.882 [2024-07-16 00:05:44.032519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.882 [2024-07-16 00:05:44.040083] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.882 [2024-07-16 00:05:44.040194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.882 [2024-07-16 00:05:44.040210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.882 [2024-07-16 00:05:44.048875] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.882 [2024-07-16 00:05:44.048960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.882 [2024-07-16 00:05:44.048975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.882 [2024-07-16 00:05:44.056340] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.882 [2024-07-16 00:05:44.056409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.882 [2024-07-16 00:05:44.056424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.882 [2024-07-16 00:05:44.065187] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:28.882 [2024-07-16 00:05:44.065415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.882 [2024-07-16 00:05:44.065432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.143 [2024-07-16 00:05:44.072576] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.143 [2024-07-16 00:05:44.072993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.143 [2024-07-16 00:05:44.073011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.143 [2024-07-16 00:05:44.082582] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.143 [2024-07-16 00:05:44.082907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.143 [2024-07-16 00:05:44.082923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.143 [2024-07-16 00:05:44.092247] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.143 [2024-07-16 00:05:44.092625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.143 [2024-07-16 00:05:44.092642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.143 [2024-07-16 00:05:44.101753] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.143 [2024-07-16 00:05:44.102071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.143 [2024-07-16 00:05:44.102088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.143 [2024-07-16 00:05:44.112471] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.143 [2024-07-16 00:05:44.112787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.143 [2024-07-16 00:05:44.112803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.143 [2024-07-16 00:05:44.122750] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.143 [2024-07-16 00:05:44.123072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.143 [2024-07-16 00:05:44.123089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.143 [2024-07-16 00:05:44.130956] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.143 [2024-07-16 00:05:44.131260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.143 [2024-07-16 00:05:44.131277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.143 [2024-07-16 00:05:44.140606] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.143 [2024-07-16 00:05:44.140905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.143 [2024-07-16 00:05:44.140923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.143 [2024-07-16 00:05:44.149330] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.143 [2024-07-16 00:05:44.149709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.143 [2024-07-16 00:05:44.149726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.143 [2024-07-16 00:05:44.157348] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.143 [2024-07-16 00:05:44.157666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.143 [2024-07-16 00:05:44.157683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.143 [2024-07-16 00:05:44.164033] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.143 [2024-07-16 00:05:44.164141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.143 [2024-07-16 00:05:44.164156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.143 [2024-07-16 00:05:44.172158] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.143 [2024-07-16 00:05:44.172486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.143 [2024-07-16 00:05:44.172503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.143 [2024-07-16 00:05:44.181982] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.143 [2024-07-16 00:05:44.182384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.143 [2024-07-16 00:05:44.182401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.143 [2024-07-16 00:05:44.189298] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.143 [2024-07-16 00:05:44.189615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.143 
[2024-07-16 00:05:44.189631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.143 [2024-07-16 00:05:44.198475] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.143 [2024-07-16 00:05:44.198784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.143 [2024-07-16 00:05:44.198800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.143 [2024-07-16 00:05:44.206949] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.143 [2024-07-16 00:05:44.207271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.143 [2024-07-16 00:05:44.207288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.143 [2024-07-16 00:05:44.215997] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.143 [2024-07-16 00:05:44.216225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.143 [2024-07-16 00:05:44.216246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.143 [2024-07-16 00:05:44.226261] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.143 [2024-07-16 00:05:44.226352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.143 [2024-07-16 00:05:44.226366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.143 [2024-07-16 00:05:44.237714] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.143 [2024-07-16 00:05:44.238016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.143 [2024-07-16 00:05:44.238033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.143 [2024-07-16 00:05:44.248529] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.144 [2024-07-16 00:05:44.248617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.144 [2024-07-16 00:05:44.248635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.144 [2024-07-16 00:05:44.260707] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.144 [2024-07-16 00:05:44.260859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.144 [2024-07-16 00:05:44.260874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.144 [2024-07-16 00:05:44.272950] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.144 [2024-07-16 00:05:44.273292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.144 [2024-07-16 00:05:44.273310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.144 [2024-07-16 00:05:44.284033] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.144 [2024-07-16 00:05:44.284394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.144 [2024-07-16 00:05:44.284412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.144 [2024-07-16 00:05:44.296798] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.144 [2024-07-16 00:05:44.297148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.144 [2024-07-16 00:05:44.297165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.144 [2024-07-16 00:05:44.307529] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.144 [2024-07-16 00:05:44.307609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.144 [2024-07-16 00:05:44.307624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.144 [2024-07-16 00:05:44.319164] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.144 [2024-07-16 00:05:44.319527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.144 [2024-07-16 00:05:44.319544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.144 [2024-07-16 00:05:44.330251] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.144 [2024-07-16 00:05:44.330379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.144 [2024-07-16 00:05:44.330395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.405 [2024-07-16 00:05:44.340118] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.405 [2024-07-16 00:05:44.340353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.405 [2024-07-16 00:05:44.340369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.405 [2024-07-16 00:05:44.349544] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.405 [2024-07-16 00:05:44.349857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.405 [2024-07-16 00:05:44.349874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.405 [2024-07-16 00:05:44.357338] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.405 [2024-07-16 00:05:44.357662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.405 [2024-07-16 00:05:44.357679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.405 [2024-07-16 00:05:44.365155] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.405 [2024-07-16 00:05:44.365506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.405 [2024-07-16 00:05:44.365523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.405 [2024-07-16 00:05:44.373350] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.405 [2024-07-16 00:05:44.373704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.405 [2024-07-16 00:05:44.373721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.405 [2024-07-16 00:05:44.380137] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.405 [2024-07-16 00:05:44.380491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.405 [2024-07-16 00:05:44.380508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.405 [2024-07-16 00:05:44.387098] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.405 [2024-07-16 00:05:44.387327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.405 [2024-07-16 00:05:44.387342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.405 [2024-07-16 00:05:44.393551] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.405 [2024-07-16 00:05:44.393855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.405 [2024-07-16 00:05:44.393872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.405 [2024-07-16 00:05:44.401027] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.405 [2024-07-16 00:05:44.401356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.405 [2024-07-16 00:05:44.401374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.405 [2024-07-16 00:05:44.406554] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.405 [2024-07-16 00:05:44.406909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.405 [2024-07-16 00:05:44.406926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.405 [2024-07-16 00:05:44.413000] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.405 [2024-07-16 00:05:44.413224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.405 [2024-07-16 00:05:44.413245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.405 [2024-07-16 00:05:44.419980] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.405 [2024-07-16 00:05:44.420291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.405 [2024-07-16 00:05:44.420309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.405 [2024-07-16 00:05:44.427201] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.405 [2024-07-16 00:05:44.427431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.405 [2024-07-16 00:05:44.427447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.405 [2024-07-16 00:05:44.434587] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.405 [2024-07-16 00:05:44.434800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.405 [2024-07-16 00:05:44.434817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.405 [2024-07-16 00:05:44.444743] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.405 
[2024-07-16 00:05:44.445061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.405 [2024-07-16 00:05:44.445078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.405 [2024-07-16 00:05:44.454292] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.405 [2024-07-16 00:05:44.454395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.405 [2024-07-16 00:05:44.454410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.405 [2024-07-16 00:05:44.463853] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.405 [2024-07-16 00:05:44.464085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.405 [2024-07-16 00:05:44.464101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.405 [2024-07-16 00:05:44.473978] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.405 [2024-07-16 00:05:44.474290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.405 [2024-07-16 00:05:44.474307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.405 [2024-07-16 00:05:44.483499] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.405 [2024-07-16 00:05:44.483730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.405 [2024-07-16 00:05:44.483749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.405 [2024-07-16 00:05:44.494603] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.405 [2024-07-16 00:05:44.494946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.405 [2024-07-16 00:05:44.494964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.405 [2024-07-16 00:05:44.505952] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.405 [2024-07-16 00:05:44.506273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.405 [2024-07-16 00:05:44.506290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.405 [2024-07-16 00:05:44.517515] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.406 [2024-07-16 00:05:44.517863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.406 [2024-07-16 00:05:44.517880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.406 [2024-07-16 00:05:44.528720] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.406 [2024-07-16 00:05:44.528865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.406 [2024-07-16 00:05:44.528880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.406 [2024-07-16 00:05:44.538891] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.406 [2024-07-16 00:05:44.539236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.406 [2024-07-16 00:05:44.539253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.406 [2024-07-16 00:05:44.546650] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.406 [2024-07-16 00:05:44.546972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.406 [2024-07-16 00:05:44.546989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.406 [2024-07-16 00:05:44.556295] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.406 [2024-07-16 00:05:44.556660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.406 [2024-07-16 00:05:44.556677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.406 [2024-07-16 00:05:44.565667] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.406 [2024-07-16 00:05:44.565968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.406 [2024-07-16 00:05:44.565986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.406 [2024-07-16 00:05:44.575478] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.406 [2024-07-16 00:05:44.575708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.406 [2024-07-16 00:05:44.575724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.406 [2024-07-16 00:05:44.582576] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.406 [2024-07-16 00:05:44.582795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.406 [2024-07-16 00:05:44.582812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.406 [2024-07-16 00:05:44.589464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.406 [2024-07-16 00:05:44.589680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.406 [2024-07-16 00:05:44.589697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.666 [2024-07-16 00:05:44.597554] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.666 [2024-07-16 00:05:44.597909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.666 [2024-07-16 00:05:44.597926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.666 [2024-07-16 00:05:44.606511] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.666 [2024-07-16 00:05:44.606724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.666 [2024-07-16 00:05:44.606740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.666 [2024-07-16 00:05:44.614424] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.666 [2024-07-16 00:05:44.614867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.666 [2024-07-16 00:05:44.614884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.666 [2024-07-16 00:05:44.624301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.666 [2024-07-16 00:05:44.624543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.666 [2024-07-16 00:05:44.624558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.666 [2024-07-16 00:05:44.631820] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.666 [2024-07-16 00:05:44.632113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.666 [2024-07-16 00:05:44.632130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:29:29.666 [2024-07-16 00:05:44.639072] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.666 [2024-07-16 00:05:44.639306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.666 [2024-07-16 00:05:44.639328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.666 [2024-07-16 00:05:44.646497] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.666 [2024-07-16 00:05:44.646716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.666 [2024-07-16 00:05:44.646732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.666 [2024-07-16 00:05:44.654544] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.666 [2024-07-16 00:05:44.654897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.666 [2024-07-16 00:05:44.654914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.666 [2024-07-16 00:05:44.662684] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.666 [2024-07-16 00:05:44.662915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.666 [2024-07-16 00:05:44.662932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.666 [2024-07-16 00:05:44.671432] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.666 [2024-07-16 00:05:44.671759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.666 [2024-07-16 00:05:44.671775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.666 [2024-07-16 00:05:44.678851] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.666 [2024-07-16 00:05:44.679054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.666 [2024-07-16 00:05:44.679070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.666 [2024-07-16 00:05:44.688611] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.666 [2024-07-16 00:05:44.688925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.666 [2024-07-16 00:05:44.688941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.666 [2024-07-16 00:05:44.697197] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.666 [2024-07-16 00:05:44.697411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.666 [2024-07-16 00:05:44.697427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.666 [2024-07-16 00:05:44.706405] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.666 [2024-07-16 00:05:44.706712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.666 [2024-07-16 00:05:44.706729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.666 [2024-07-16 00:05:44.715060] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.666 [2024-07-16 00:05:44.715375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.666 [2024-07-16 00:05:44.715392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.666 [2024-07-16 00:05:44.724146] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.666 [2024-07-16 00:05:44.724472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.666 [2024-07-16 00:05:44.724489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.666 [2024-07-16 00:05:44.733843] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.666 [2024-07-16 00:05:44.734073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.666 [2024-07-16 00:05:44.734090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.666 [2024-07-16 00:05:44.743390] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.666 [2024-07-16 00:05:44.743685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.666 [2024-07-16 00:05:44.743702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.667 [2024-07-16 00:05:44.752997] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.667 [2024-07-16 00:05:44.753319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.667 [2024-07-16 00:05:44.753336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.667 [2024-07-16 00:05:44.764070] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.667 [2024-07-16 00:05:44.764445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.667 [2024-07-16 00:05:44.764463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.667 [2024-07-16 00:05:44.773488] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.667 [2024-07-16 00:05:44.773902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.667 [2024-07-16 00:05:44.773920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.667 [2024-07-16 00:05:44.782220] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.667 [2024-07-16 00:05:44.782565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.667 [2024-07-16 00:05:44.782583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.667 [2024-07-16 00:05:44.791810] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.667 [2024-07-16 00:05:44.792187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.667 [2024-07-16 00:05:44.792204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.667 [2024-07-16 00:05:44.801363] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.667 [2024-07-16 00:05:44.801665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.667 [2024-07-16 00:05:44.801682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.667 [2024-07-16 00:05:44.811462] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.667 [2024-07-16 00:05:44.811794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.667 [2024-07-16 00:05:44.811812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.667 [2024-07-16 00:05:44.820539] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.667 [2024-07-16 00:05:44.820828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.667 [2024-07-16 00:05:44.820845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.667 [2024-07-16 00:05:44.830911] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.667 [2024-07-16 00:05:44.831248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.667 [2024-07-16 00:05:44.831265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.667 [2024-07-16 00:05:44.841125] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.667 [2024-07-16 00:05:44.841518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.667 [2024-07-16 00:05:44.841535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.667 [2024-07-16 00:05:44.850655] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.667 [2024-07-16 00:05:44.850975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.667 [2024-07-16 00:05:44.850992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.928 [2024-07-16 00:05:44.860375] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.928 [2024-07-16 00:05:44.860584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.928 [2024-07-16 00:05:44.860601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.928 [2024-07-16 00:05:44.870030] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.928 [2024-07-16 00:05:44.870281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.928 [2024-07-16 00:05:44.870306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.928 [2024-07-16 00:05:44.878723] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.928 [2024-07-16 00:05:44.878965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.928 [2024-07-16 00:05:44.878985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.928 [2024-07-16 00:05:44.885410] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.928 [2024-07-16 00:05:44.885702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.928 
[2024-07-16 00:05:44.885720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.928 [2024-07-16 00:05:44.894090] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.928 [2024-07-16 00:05:44.894418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.928 [2024-07-16 00:05:44.894436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.928 [2024-07-16 00:05:44.900987] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.928 [2024-07-16 00:05:44.901203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.928 [2024-07-16 00:05:44.901220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.928 [2024-07-16 00:05:44.908370] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.928 [2024-07-16 00:05:44.908699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.928 [2024-07-16 00:05:44.908716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.928 [2024-07-16 00:05:44.916921] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.928 [2024-07-16 00:05:44.917243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.928 [2024-07-16 00:05:44.917260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.928 [2024-07-16 00:05:44.924506] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.928 [2024-07-16 00:05:44.924718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.928 [2024-07-16 00:05:44.924734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.928 [2024-07-16 00:05:44.933276] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.928 [2024-07-16 00:05:44.933618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.928 [2024-07-16 00:05:44.933635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.928 [2024-07-16 00:05:44.941352] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.928 [2024-07-16 00:05:44.941792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.928 [2024-07-16 00:05:44.941810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.928 [2024-07-16 00:05:44.948865] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.928 [2024-07-16 00:05:44.949085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.928 [2024-07-16 00:05:44.949101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.928 [2024-07-16 00:05:44.958908] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.928 [2024-07-16 00:05:44.959251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.928 [2024-07-16 00:05:44.959269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.928 [2024-07-16 00:05:44.967115] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.928 [2024-07-16 00:05:44.967326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.928 [2024-07-16 00:05:44.967343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.928 [2024-07-16 00:05:44.972872] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.928 [2024-07-16 00:05:44.973169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.928 [2024-07-16 00:05:44.973186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.928 [2024-07-16 00:05:44.978502] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.928 [2024-07-16 00:05:44.978752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.928 [2024-07-16 00:05:44.978769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.928 [2024-07-16 00:05:44.988065] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.928 [2024-07-16 00:05:44.988372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.928 [2024-07-16 00:05:44.988389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.928 [2024-07-16 00:05:44.995341] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.928 [2024-07-16 00:05:44.995560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.928 [2024-07-16 00:05:44.995576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.928 [2024-07-16 00:05:45.005140] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.929 [2024-07-16 00:05:45.005496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.929 [2024-07-16 00:05:45.005513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.929 [2024-07-16 00:05:45.014361] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.929 [2024-07-16 00:05:45.014681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.929 [2024-07-16 00:05:45.014697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.929 [2024-07-16 00:05:45.024654] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.929 [2024-07-16 00:05:45.024966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.929 [2024-07-16 00:05:45.024983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.929 [2024-07-16 00:05:45.035043] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.929 [2024-07-16 00:05:45.035368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.929 [2024-07-16 00:05:45.035384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.929 [2024-07-16 00:05:45.047223] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.929 [2024-07-16 00:05:45.047625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.929 [2024-07-16 00:05:45.047642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.929 [2024-07-16 00:05:45.057077] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.929 [2024-07-16 00:05:45.057403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.929 [2024-07-16 00:05:45.057420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.929 [2024-07-16 00:05:45.065868] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.929 [2024-07-16 00:05:45.066195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.929 [2024-07-16 00:05:45.066212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.929 [2024-07-16 00:05:45.075363] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.929 [2024-07-16 00:05:45.075760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.929 [2024-07-16 00:05:45.075778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:29.929 [2024-07-16 00:05:45.083454] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.929 [2024-07-16 00:05:45.083699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.929 [2024-07-16 00:05:45.083717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.929 [2024-07-16 00:05:45.091981] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.929 [2024-07-16 00:05:45.092190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.929 [2024-07-16 00:05:45.092206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:29.929 [2024-07-16 00:05:45.100574] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.929 [2024-07-16 00:05:45.100889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.929 [2024-07-16 00:05:45.100909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:29.929 [2024-07-16 00:05:45.109776] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:29.929 [2024-07-16 00:05:45.110029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.929 [2024-07-16 00:05:45.110046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.190 [2024-07-16 00:05:45.120541] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:30.190 [2024-07-16 00:05:45.120945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.190 [2024-07-16 00:05:45.120962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.190 [2024-07-16 00:05:45.130539] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:30.190 
[2024-07-16 00:05:45.130876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.190 [2024-07-16 00:05:45.130894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.190 [2024-07-16 00:05:45.140291] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:30.190 [2024-07-16 00:05:45.140468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.190 [2024-07-16 00:05:45.140484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.190 [2024-07-16 00:05:45.150158] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:30.190 [2024-07-16 00:05:45.150377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.190 [2024-07-16 00:05:45.150393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.190 [2024-07-16 00:05:45.160277] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:30.190 [2024-07-16 00:05:45.160616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.190 [2024-07-16 00:05:45.160633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.190 [2024-07-16 00:05:45.170198] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:30.190 [2024-07-16 00:05:45.170573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.190 [2024-07-16 00:05:45.170590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.190 [2024-07-16 00:05:45.180171] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:30.190 [2024-07-16 00:05:45.180533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.190 [2024-07-16 00:05:45.180550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.190 [2024-07-16 00:05:45.189701] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:30.190 [2024-07-16 00:05:45.190029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.190 [2024-07-16 00:05:45.190046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.190 [2024-07-16 00:05:45.199224] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:30.190 [2024-07-16 00:05:45.199443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.190 [2024-07-16 00:05:45.199459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.190 [2024-07-16 00:05:45.209129] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:30.190 [2024-07-16 00:05:45.209453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.190 [2024-07-16 00:05:45.209470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.190 [2024-07-16 00:05:45.217470] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:30.190 [2024-07-16 00:05:45.217806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.190 [2024-07-16 00:05:45.217823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.190 [2024-07-16 00:05:45.226651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:30.190 [2024-07-16 00:05:45.226950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.190 [2024-07-16 00:05:45.226967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.190 [2024-07-16 00:05:45.234591] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:30.190 [2024-07-16 00:05:45.234854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.190 [2024-07-16 00:05:45.234871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.190 [2024-07-16 00:05:45.241788] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:30.190 [2024-07-16 00:05:45.241992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.190 [2024-07-16 00:05:45.242008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.190 [2024-07-16 00:05:45.248560] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90 00:29:30.190 [2024-07-16 00:05:45.248764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.190 [2024-07-16 00:05:45.248780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.190 [2024-07-16 00:05:45.255073] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21c6bf0) with pdu=0x2000190fef90
00:29:30.190 [2024-07-16 00:05:45.255197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.190 [2024-07-16 00:05:45.255213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:30.190
00:29:30.190 Latency(us)
00:29:30.190 Device Information : runtime(s)    IOPS    MiB/s    Fail/s    TO/s    Average    min    max
00:29:30.190 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:30.190 nvme0n1 :            2.00       3513.29  439.16   0.00      0.00    4547.08    2020.69 12888.75
00:29:30.190 ===================================================================================================================
00:29:30.190 Total :              3513.29  439.16   0.00      0.00    4547.08    2020.69 12888.75
00:29:30.190 0
00:29:30.190 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:30.190 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:30.190 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:30.190 | .driver_specific
00:29:30.190 | .nvme_error
00:29:30.190 | .status_code
00:29:30.190 | .command_transient_transport_error'
00:29:30.190 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:30.451 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 227 > 0 ))
00:29:30.451 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 650182
00:29:30.451 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@942 -- # '[' -z 650182 ']'
00:29:30.451 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # kill -0 650182
00:29:30.451 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # uname
00:29:30.451 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:29:30.451 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 650182
00:29:30.451 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # process_name=reactor_1
00:29:30.451 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']'
00:29:30.451 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # echo 'killing process with pid 650182'
00:29:30.451 killing process with pid 650182
00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill 650182
00:29:30.451 Received shutdown signal, test time was about 2.000000 seconds
00:29:30.451
00:29:30.451 Latency(us)
00:29:30.451 Device Information : runtime(s)    IOPS    MiB/s    Fail/s    TO/s    Average    min    max
00:29:30.451 ===================================================================================================================
00:29:30.451 Total :              0.00     0.00     0.00      0.00    0.00       0.00    0.00
00:29:30.451 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # wait 650182
00:29:30.451 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error
-- host/digest.sh@116 -- # killprocess 647783 00:29:30.451 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@942 -- # '[' -z 647783 ']' 00:29:30.451 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # kill -0 647783 00:29:30.451 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # uname 00:29:30.451 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:29:30.451 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 647783 00:29:30.711 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:29:30.711 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:29:30.711 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # echo 'killing process with pid 647783' 00:29:30.712 killing process with pid 647783 00:29:30.712 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill 647783 00:29:30.712 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # wait 647783 00:29:30.712 00:29:30.712 real 0m16.351s 00:29:30.712 user 0m32.053s 00:29:30.712 sys 0m3.311s 00:29:30.712 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1118 -- # xtrace_disable 00:29:30.712 00:05:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:30.712 ************************************ 00:29:30.712 END TEST nvmf_digest_error 00:29:30.712 ************************************ 00:29:30.712 00:05:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1136 -- # return 0 00:29:30.712 00:05:45 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:30.712 00:05:45 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:30.712 00:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:30.712 00:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:29:30.712 00:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:30.712 00:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:29:30.712 00:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:30.712 00:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:30.712 rmmod nvme_tcp 00:29:30.712 rmmod nvme_fabrics 00:29:30.712 rmmod nvme_keyring 00:29:30.972 00:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:30.972 00:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:29:30.972 00:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:29:30.972 00:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 647783 ']' 00:29:30.972 00:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 647783 00:29:30.972 00:05:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@942 -- # '[' -z 647783 ']' 00:29:30.972 00:05:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # kill -0 647783 00:29:30.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 946: kill: (647783) - No such process 00:29:30.972 00:05:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@969 -- # echo 'Process with pid 647783 is not found' 00:29:30.972 Process with pid 647783 is not found 
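The host/digest.sh trace above reads the injected-error count back over bdevperf's RPC socket before tearing the processes down. A minimal stand-alone sketch of that check, using the rpc.py path, the /var/tmp/bperf.sock socket and the jq filter exactly as they appear in the trace (the spdk and errcount variable names are introduced here only for illustration):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Count of NVMe completions with status COMMAND TRANSIENT TRANSPORT ERROR,
    # as accumulated by the bdevperf instance behind /var/tmp/bperf.sock.
    errcount=$("$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 )) && echo "data digest errors surfaced as $errcount transient transport errors"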
00:29:30.972 00:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:30.972 00:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:30.972 00:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:30.972 00:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:30.972 00:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:30.972 00:05:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.972 00:05:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:30.972 00:05:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.881 00:05:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:32.881 00:29:32.881 real 0m43.398s 00:29:32.881 user 1m6.384s 00:29:32.881 sys 0m12.887s 00:29:32.881 00:05:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1118 -- # xtrace_disable 00:29:32.881 00:05:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:32.881 ************************************ 00:29:32.881 END TEST nvmf_digest 00:29:32.881 ************************************ 00:29:32.881 00:05:48 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:29:32.881 00:05:48 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:29:32.881 00:05:48 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:29:32.881 00:05:48 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:29:32.881 00:05:48 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:32.881 00:05:48 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:29:32.881 00:05:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:29:32.881 00:05:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:32.881 ************************************ 00:29:32.881 START TEST nvmf_bdevperf 00:29:32.881 ************************************ 00:29:32.881 00:05:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:33.142 * Looking for test storage... 
00:29:33.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.142 00:05:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:33.143 00:05:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:41.284 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:41.284 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:41.284 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:41.284 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:41.284 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:41.284 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:41.284 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:41.284 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:41.285 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:41.285 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:41.285 Found net devices under 0000:31:00.0: cvl_0_0 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:41.285 Found net devices under 0000:31:00.1: cvl_0_1 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:41.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:41.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.551 ms 00:29:41.285 00:29:41.285 --- 10.0.0.2 ping statistics --- 00:29:41.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.285 rtt min/avg/max/mdev = 0.551/0.551/0.551/0.000 ms 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:41.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:41.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:29:41.285 00:29:41.285 --- 10.0.0.1 ping statistics --- 00:29:41.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.285 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=655560 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 655560 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@823 -- # '[' -z 655560 ']' 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@828 -- # local max_retries=100 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # xtrace_disable 00:29:41.285 00:05:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:41.285 [2024-07-16 00:05:56.451729] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:29:41.285 [2024-07-16 00:05:56.451778] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:41.546 [2024-07-16 00:05:56.542701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:41.546 [2024-07-16 00:05:56.618718] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:41.546 [2024-07-16 00:05:56.618772] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
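The nvmf/common.sh trace above splits the two detected ports into a target namespace and an initiator side before the connectivity pings. A condensed sketch of that setup, run as root, with the interface names and addresses reported in the trace (cvl_0_0 becomes the 10.0.0.2 target inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator):

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                  # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator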
00:29:41.546 [2024-07-16 00:05:56.618780] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:41.546 [2024-07-16 00:05:56.618787] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:41.546 [2024-07-16 00:05:56.618793] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:41.546 [2024-07-16 00:05:56.618918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:41.546 [2024-07-16 00:05:56.619089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.546 [2024-07-16 00:05:56.619089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:42.118 00:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:29:42.118 00:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # return 0 00:29:42.118 00:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:42.118 00:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:42.118 00:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:42.118 00:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:42.118 00:05:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:42.118 00:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:42.118 00:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:42.118 [2024-07-16 00:05:57.265602] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:42.118 00:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:42.118 00:05:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:42.118 00:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:42.118 00:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:42.118 Malloc0 00:29:42.118 00:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:42.118 00:05:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:42.118 00:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:42.118 00:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:42.380 00:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:42.380 00:05:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:42.380 00:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:42.380 00:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:42.380 00:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:42.380 00:05:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:42.380 00:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:42.380 00:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:42.380 [2024-07-16 00:05:57.331712] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:42.380 00:05:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:42.380 00:05:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:42.380 00:05:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:42.380 00:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:42.380 00:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:42.380 00:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:42.380 00:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:42.380 { 00:29:42.380 "params": { 00:29:42.380 "name": "Nvme$subsystem", 00:29:42.380 "trtype": "$TEST_TRANSPORT", 00:29:42.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.380 "adrfam": "ipv4", 00:29:42.380 "trsvcid": "$NVMF_PORT", 00:29:42.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.380 "hdgst": ${hdgst:-false}, 00:29:42.380 "ddgst": ${ddgst:-false} 00:29:42.380 }, 00:29:42.380 "method": "bdev_nvme_attach_controller" 00:29:42.380 } 00:29:42.380 EOF 00:29:42.380 )") 00:29:42.380 00:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:42.380 00:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:42.380 00:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:42.380 00:05:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:42.380 "params": { 00:29:42.380 "name": "Nvme1", 00:29:42.380 "trtype": "tcp", 00:29:42.380 "traddr": "10.0.0.2", 00:29:42.380 "adrfam": "ipv4", 00:29:42.380 "trsvcid": "4420", 00:29:42.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:42.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:42.380 "hdgst": false, 00:29:42.380 "ddgst": false 00:29:42.380 }, 00:29:42.380 "method": "bdev_nvme_attach_controller" 00:29:42.380 }' 00:29:42.380 [2024-07-16 00:05:57.391430] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:29:42.380 [2024-07-16 00:05:57.391502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid655692 ] 00:29:42.380 [2024-07-16 00:05:57.459889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.380 [2024-07-16 00:05:57.524465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.642 Running I/O for 1 seconds... 
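The rpc_cmd calls traced before this run built the target side: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem cnode1 carrying that namespace, and a listener on 10.0.0.2:4420. A sketch of the same sequence issued directly through scripts/rpc.py against the nvmf_tgt launched above, assuming rpc_cmd simply forwards its arguments to rpc.py on the default /var/tmp/spdk.sock socket (all argument values are copied from the trace):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    # The harness waits for /var/tmp/spdk.sock to appear (waitforlisten) before issuing RPCs.
    rpc="$spdk/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420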
00:29:43.586 00:29:43.586 Latency(us) 00:29:43.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.586 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:43.586 Verification LBA range: start 0x0 length 0x4000 00:29:43.586 Nvme1n1 : 1.01 8897.68 34.76 0.00 0.00 14327.01 3099.31 15400.96 00:29:43.586 =================================================================================================================== 00:29:43.586 Total : 8897.68 34.76 0.00 0.00 14327.01 3099.31 15400.96 00:29:43.847 00:05:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=655936 00:29:43.847 00:05:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:43.847 00:05:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:43.847 00:05:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:43.847 00:05:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:43.847 00:05:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:43.847 00:05:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:43.847 00:05:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:43.847 { 00:29:43.847 "params": { 00:29:43.847 "name": "Nvme$subsystem", 00:29:43.847 "trtype": "$TEST_TRANSPORT", 00:29:43.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:43.847 "adrfam": "ipv4", 00:29:43.847 "trsvcid": "$NVMF_PORT", 00:29:43.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:43.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:43.847 "hdgst": ${hdgst:-false}, 00:29:43.847 "ddgst": ${ddgst:-false} 00:29:43.847 }, 00:29:43.847 "method": "bdev_nvme_attach_controller" 00:29:43.847 } 00:29:43.847 EOF 00:29:43.847 )") 00:29:43.847 00:05:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:43.847 00:05:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:43.847 00:05:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:43.847 00:05:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:43.847 "params": { 00:29:43.847 "name": "Nvme1", 00:29:43.847 "trtype": "tcp", 00:29:43.848 "traddr": "10.0.0.2", 00:29:43.848 "adrfam": "ipv4", 00:29:43.848 "trsvcid": "4420", 00:29:43.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:43.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:43.848 "hdgst": false, 00:29:43.848 "ddgst": false 00:29:43.848 }, 00:29:43.848 "method": "bdev_nvme_attach_controller" 00:29:43.848 }' 00:29:43.848 [2024-07-16 00:05:58.907719] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:29:43.848 [2024-07-16 00:05:58.907773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid655936 ] 00:29:43.848 [2024-07-16 00:05:58.974619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.848 [2024-07-16 00:05:59.036259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.110 Running I/O for 15 seconds... 
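Both bdevperf runs above receive their bdev configuration as JSON on an anonymous file descriptor; the generated bdev_nvme_attach_controller entry is printed verbatim in the trace. A sketch of reproducing the second run by hand, assuming the usual bdevperf "subsystems"/"bdev" envelope around that entry (the /tmp/bperf.json file name is chosen here only for illustration; the -q/-o/-w/-t/-f flags are the ones from the trace):

    # JSON body taken from the gen_nvmf_target_json output shown above.
    cat > /tmp/bperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      --json /tmp/bperf.json -q 128 -o 4096 -w verify -t 15 -f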
00:29:47.415 00:06:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 655560 00:29:47.415 00:06:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:47.415 [2024-07-16 00:06:01.876168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:105432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.415 [2024-07-16 00:06:01.876212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.415 [2024-07-16 00:06:01.876336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:105440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.415 [2024-07-16 00:06:01.876346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.415 [2024-07-16 00:06:01.876358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.415 [2024-07-16 00:06:01.876366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.415 [2024-07-16 00:06:01.876377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:105456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.415 [2024-07-16 00:06:01.876388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.415 [2024-07-16 00:06:01.876400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:105464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.415 [2024-07-16 00:06:01.876411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.415 [2024-07-16 00:06:01.876427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:105472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.415 [2024-07-16 00:06:01.876439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.415 [2024-07-16 00:06:01.876451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:105480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.415 [2024-07-16 00:06:01.876461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.415 [2024-07-16 00:06:01.876473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:105488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.415 [2024-07-16 00:06:01.876482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:105496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:105504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 
00:06:01.876522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:105512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:105520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:105536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:105544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:105552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:105560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:105568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:105576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876703] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:105592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:105600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:105608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:105616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:105624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:105640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:105648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:105664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876869] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:105680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:105696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:105704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:105720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.876987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.876997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:105728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.877004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.877014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:105736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.877021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.877030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.877037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.877047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:105752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.416 [2024-07-16 00:06:01.877055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.416 [2024-07-16 00:06:01.877064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:105760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:105768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:105784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:105808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:105824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:105832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:105840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:105848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:105856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:105872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:105880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:105896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:47.417 [2024-07-16 00:06:01.877390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:105912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:105920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:105936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:105944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:105960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:105968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:105976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:105984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 
00:06:01.877559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:106000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:106016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.417 [2024-07-16 00:06:01.877626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:106024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.417 [2024-07-16 00:06:01.877634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.877643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.877650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.877659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:106040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.877667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.877676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:106048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.877684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.877693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:106056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.877700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.877709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:106064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.877717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.877727] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.877735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.877745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:106080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.877752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.877761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:106088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.877768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.877777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:106096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.877785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.877794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:106104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.877802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.877811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:106112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.877818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.877827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.877835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.877845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.877852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.877861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.877868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.877877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:106144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.877885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.877894] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:126 nsid:1 lba:106152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.877901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.877910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.877918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.877927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.877938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.877948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:106176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.877955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.877964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.877971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.877980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:106192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.877988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.877998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.878005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.878014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.878021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.878030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:106216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.878038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.878048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:106224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.878055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.878064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.878072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.878081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:106240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.878089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.878098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:106248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.418 [2024-07-16 00:06:01.878105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.878115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.418 [2024-07-16 00:06:01.878122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.878131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:106408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.418 [2024-07-16 00:06:01.878139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.878148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:106416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.418 [2024-07-16 00:06:01.878157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.878166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:106424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.418 [2024-07-16 00:06:01.878173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.878182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:106432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.418 [2024-07-16 00:06:01.878189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.418 [2024-07-16 00:06:01.878199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:106440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.418 [2024-07-16 00:06:01.878206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.419 [2024-07-16 00:06:01.878215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:106448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.419 [2024-07-16 00:06:01.878224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.419 [2024-07-16 00:06:01.878239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:106256 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:47.419 [2024-07-16 00:06:01.878247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.419 [2024-07-16 00:06:01.878257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.419 [2024-07-16 00:06:01.878265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.419 [2024-07-16 00:06:01.878274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.419 [2024-07-16 00:06:01.878281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.419 [2024-07-16 00:06:01.878291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:106280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.419 [2024-07-16 00:06:01.878299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.419 [2024-07-16 00:06:01.878309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:106288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.419 [2024-07-16 00:06:01.878316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.419 [2024-07-16 00:06:01.878325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:106296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.419 [2024-07-16 00:06:01.878332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.419 [2024-07-16 00:06:01.878342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.419 [2024-07-16 00:06:01.878349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.419 [2024-07-16 00:06:01.878359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:106312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.419 [2024-07-16 00:06:01.878367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.419 [2024-07-16 00:06:01.878378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:106320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.419 [2024-07-16 00:06:01.878385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.419 [2024-07-16 00:06:01.878394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.419 [2024-07-16 00:06:01.878402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.419 [2024-07-16 00:06:01.878411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:106336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:47.419 [2024-07-16 00:06:01.878419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.419 [2024-07-16 00:06:01.878428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:106344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.419 [2024-07-16 00:06:01.878435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.419 [2024-07-16 00:06:01.878445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:106352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.419 [2024-07-16 00:06:01.878452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.419 [2024-07-16 00:06:01.878462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:106360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.419 [2024-07-16 00:06:01.878469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.419 [2024-07-16 00:06:01.878479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:106368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.419 [2024-07-16 00:06:01.878486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.419 [2024-07-16 00:06:01.878495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:106376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.419 [2024-07-16 00:06:01.878503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.419 [2024-07-16 00:06:01.878513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:106384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:47.419 [2024-07-16 00:06:01.878520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.419 [2024-07-16 00:06:01.878528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf7940 is same with the state(5) to be set 00:29:47.419 [2024-07-16 00:06:01.878537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:47.419 [2024-07-16 00:06:01.878543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:47.419 [2024-07-16 00:06:01.878550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106392 len:8 PRP1 0x0 PRP2 0x0 00:29:47.419 [2024-07-16 00:06:01.878558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.419 [2024-07-16 00:06:01.878597] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbf7940 was disconnected and freed. reset controller. 
00:29:47.419 [2024-07-16 00:06:01.882189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.419 [2024-07-16 00:06:01.882242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.419 [2024-07-16 00:06:01.883035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.419 [2024-07-16 00:06:01.883051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.419 [2024-07-16 00:06:01.883060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.419 [2024-07-16 00:06:01.883283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.419 [2024-07-16 00:06:01.883501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.419 [2024-07-16 00:06:01.883510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.419 [2024-07-16 00:06:01.883519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.419 [2024-07-16 00:06:01.887018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.419 [2024-07-16 00:06:01.896309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.419 [2024-07-16 00:06:01.896921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.419 [2024-07-16 00:06:01.896938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.419 [2024-07-16 00:06:01.896946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.419 [2024-07-16 00:06:01.897163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.419 [2024-07-16 00:06:01.897389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.419 [2024-07-16 00:06:01.897399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.419 [2024-07-16 00:06:01.897406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.419 [2024-07-16 00:06:01.900903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
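From here the log settles into a fixed retry loop: nvme_ctrlr_disconnect announces a reset, posix_sock_create's connect() to 10.0.0.2:4420 fails with errno = 111, the qpair flush reports Bad file descriptor, controller reinitialization fails, and _bdev_nvme_reset_ctrlr_complete logs "Resetting controller failed" before the next attempt. On Linux, errno 111 is ECONNREFUSED, meaning the host is reachable but nothing is accepting connections on that port. A small self-contained sketch of the same failure mode, using the address and port taken from the log and assuming no NVMe/TCP listener is present there:

```python
#!/usr/bin/env python3
"""Illustrate errno 111 (ECONNREFUSED) as reported by posix_sock_create in the log above."""
import errno
import socket

TARGET = ("10.0.0.2", 4420)  # address/port from the nvme_tcp_qpair_connect_sock errors

def try_connect(addr) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(2.0)
        try:
            sock.connect(addr)
            print(f"connected to {addr[0]}:{addr[1]}")
        except OSError as exc:
            # With the host reachable but no listener on the port, Linux reports errno 111;
            # an unreachable host would instead surface a timeout or EHOSTUNREACH.
            name = "ECONNREFUSED" if exc.errno == errno.ECONNREFUSED else \
                errno.errorcode.get(exc.errno, "?")
            print(f"connect() failed, errno = {exc.errno} ({name})")

if __name__ == "__main__":
    try_connect(TARGET)
```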
00:29:47.419 [2024-07-16 00:06:01.910190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.419 [2024-07-16 00:06:01.910904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.419 [2024-07-16 00:06:01.910942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.419 [2024-07-16 00:06:01.910954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.420 [2024-07-16 00:06:01.911193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.420 [2024-07-16 00:06:01.911428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.420 [2024-07-16 00:06:01.911438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.420 [2024-07-16 00:06:01.911447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.420 [2024-07-16 00:06:01.914964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.420 [2024-07-16 00:06:01.924060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.420 [2024-07-16 00:06:01.924741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.420 [2024-07-16 00:06:01.924780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.420 [2024-07-16 00:06:01.924791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.420 [2024-07-16 00:06:01.925032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.420 [2024-07-16 00:06:01.925261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.420 [2024-07-16 00:06:01.925271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.420 [2024-07-16 00:06:01.925279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.420 [2024-07-16 00:06:01.928788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.420 [2024-07-16 00:06:01.937884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.420 [2024-07-16 00:06:01.938665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.420 [2024-07-16 00:06:01.938703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.420 [2024-07-16 00:06:01.938714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.420 [2024-07-16 00:06:01.938951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.420 [2024-07-16 00:06:01.939172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.420 [2024-07-16 00:06:01.939182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.420 [2024-07-16 00:06:01.939190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.420 [2024-07-16 00:06:01.942702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.420 [2024-07-16 00:06:01.951788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.420 [2024-07-16 00:06:01.952450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.420 [2024-07-16 00:06:01.952490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.420 [2024-07-16 00:06:01.952503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.420 [2024-07-16 00:06:01.952740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.420 [2024-07-16 00:06:01.952963] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.420 [2024-07-16 00:06:01.952973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.420 [2024-07-16 00:06:01.952981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.420 [2024-07-16 00:06:01.956495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.420 [2024-07-16 00:06:01.965579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.420 [2024-07-16 00:06:01.966295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.420 [2024-07-16 00:06:01.966333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.420 [2024-07-16 00:06:01.966346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.420 [2024-07-16 00:06:01.966587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.420 [2024-07-16 00:06:01.966807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.420 [2024-07-16 00:06:01.966818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.420 [2024-07-16 00:06:01.966830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.420 [2024-07-16 00:06:01.970340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.420 [2024-07-16 00:06:01.979421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.420 [2024-07-16 00:06:01.980063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.420 [2024-07-16 00:06:01.980101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.420 [2024-07-16 00:06:01.980112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.420 [2024-07-16 00:06:01.980356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.420 [2024-07-16 00:06:01.980578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.420 [2024-07-16 00:06:01.980588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.420 [2024-07-16 00:06:01.980596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.420 [2024-07-16 00:06:01.984098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.420 [2024-07-16 00:06:01.993179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.420 [2024-07-16 00:06:01.993879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.420 [2024-07-16 00:06:01.993918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.420 [2024-07-16 00:06:01.993929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.420 [2024-07-16 00:06:01.994165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.420 [2024-07-16 00:06:01.994394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.420 [2024-07-16 00:06:01.994404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.420 [2024-07-16 00:06:01.994411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.420 [2024-07-16 00:06:01.997912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.420 [2024-07-16 00:06:02.006992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.420 [2024-07-16 00:06:02.007655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.420 [2024-07-16 00:06:02.007694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.420 [2024-07-16 00:06:02.007705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.420 [2024-07-16 00:06:02.007942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.420 [2024-07-16 00:06:02.008162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.420 [2024-07-16 00:06:02.008172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.420 [2024-07-16 00:06:02.008180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.420 [2024-07-16 00:06:02.011695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.420 [2024-07-16 00:06:02.020804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.420 [2024-07-16 00:06:02.021520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.420 [2024-07-16 00:06:02.021563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.420 [2024-07-16 00:06:02.021574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.420 [2024-07-16 00:06:02.021811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.420 [2024-07-16 00:06:02.022032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.420 [2024-07-16 00:06:02.022041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.420 [2024-07-16 00:06:02.022048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.420 [2024-07-16 00:06:02.025564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.421 [2024-07-16 00:06:02.034654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.421 [2024-07-16 00:06:02.035309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.421 [2024-07-16 00:06:02.035347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.421 [2024-07-16 00:06:02.035360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.421 [2024-07-16 00:06:02.035601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.421 [2024-07-16 00:06:02.035821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.421 [2024-07-16 00:06:02.035830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.421 [2024-07-16 00:06:02.035838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.421 [2024-07-16 00:06:02.039352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.421 [2024-07-16 00:06:02.048440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.421 [2024-07-16 00:06:02.049131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.421 [2024-07-16 00:06:02.049169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.421 [2024-07-16 00:06:02.049180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.421 [2024-07-16 00:06:02.049427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.421 [2024-07-16 00:06:02.049649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.421 [2024-07-16 00:06:02.049658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.421 [2024-07-16 00:06:02.049665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.421 [2024-07-16 00:06:02.053168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.421 [2024-07-16 00:06:02.062253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.421 [2024-07-16 00:06:02.062869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.421 [2024-07-16 00:06:02.062888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.421 [2024-07-16 00:06:02.062896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.421 [2024-07-16 00:06:02.063113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.421 [2024-07-16 00:06:02.063377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.421 [2024-07-16 00:06:02.063388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.421 [2024-07-16 00:06:02.063395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.421 [2024-07-16 00:06:02.066898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.421 [2024-07-16 00:06:02.076189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.421 [2024-07-16 00:06:02.076683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.421 [2024-07-16 00:06:02.076700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.421 [2024-07-16 00:06:02.076708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.421 [2024-07-16 00:06:02.076925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.421 [2024-07-16 00:06:02.077142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.421 [2024-07-16 00:06:02.077150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.421 [2024-07-16 00:06:02.077157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.421 [2024-07-16 00:06:02.080663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.421 [2024-07-16 00:06:02.089946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.421 [2024-07-16 00:06:02.090546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.421 [2024-07-16 00:06:02.090563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.421 [2024-07-16 00:06:02.090571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.421 [2024-07-16 00:06:02.090788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.421 [2024-07-16 00:06:02.091005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.421 [2024-07-16 00:06:02.091014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.421 [2024-07-16 00:06:02.091020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.421 [2024-07-16 00:06:02.094525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.421 [2024-07-16 00:06:02.103814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.421 [2024-07-16 00:06:02.104489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.421 [2024-07-16 00:06:02.104527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.421 [2024-07-16 00:06:02.104538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.421 [2024-07-16 00:06:02.104775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.421 [2024-07-16 00:06:02.104995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.421 [2024-07-16 00:06:02.105004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.421 [2024-07-16 00:06:02.105012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.421 [2024-07-16 00:06:02.108525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.421 [2024-07-16 00:06:02.117629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.421 [2024-07-16 00:06:02.118380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.421 [2024-07-16 00:06:02.118419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.421 [2024-07-16 00:06:02.118432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.421 [2024-07-16 00:06:02.118670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.422 [2024-07-16 00:06:02.118891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.422 [2024-07-16 00:06:02.118900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.422 [2024-07-16 00:06:02.118908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.422 [2024-07-16 00:06:02.122428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.422 [2024-07-16 00:06:02.131505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.422 [2024-07-16 00:06:02.132217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.422 [2024-07-16 00:06:02.132261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.422 [2024-07-16 00:06:02.132274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.422 [2024-07-16 00:06:02.132513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.422 [2024-07-16 00:06:02.132734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.422 [2024-07-16 00:06:02.132743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.422 [2024-07-16 00:06:02.132751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.422 [2024-07-16 00:06:02.136258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.422 [2024-07-16 00:06:02.145362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.422 [2024-07-16 00:06:02.145972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.422 [2024-07-16 00:06:02.145991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.422 [2024-07-16 00:06:02.145999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.422 [2024-07-16 00:06:02.146216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.422 [2024-07-16 00:06:02.146441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.422 [2024-07-16 00:06:02.146450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.422 [2024-07-16 00:06:02.146457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.422 [2024-07-16 00:06:02.149959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.422 [2024-07-16 00:06:02.159254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.422 [2024-07-16 00:06:02.159894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.422 [2024-07-16 00:06:02.159931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.422 [2024-07-16 00:06:02.159946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.422 [2024-07-16 00:06:02.160182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.422 [2024-07-16 00:06:02.160412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.422 [2024-07-16 00:06:02.160422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.422 [2024-07-16 00:06:02.160430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.422 [2024-07-16 00:06:02.163935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.422 [2024-07-16 00:06:02.173033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.422 [2024-07-16 00:06:02.173633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.422 [2024-07-16 00:06:02.173653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.422 [2024-07-16 00:06:02.173661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.422 [2024-07-16 00:06:02.173878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.422 [2024-07-16 00:06:02.174095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.422 [2024-07-16 00:06:02.174103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.422 [2024-07-16 00:06:02.174111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.422 [2024-07-16 00:06:02.177623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.422 [2024-07-16 00:06:02.186920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.422 [2024-07-16 00:06:02.187559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.422 [2024-07-16 00:06:02.187598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.422 [2024-07-16 00:06:02.187609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.422 [2024-07-16 00:06:02.187846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.422 [2024-07-16 00:06:02.188067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.422 [2024-07-16 00:06:02.188076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.422 [2024-07-16 00:06:02.188084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.422 [2024-07-16 00:06:02.191594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.422 [2024-07-16 00:06:02.200680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.422 [2024-07-16 00:06:02.201332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.422 [2024-07-16 00:06:02.201370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.422 [2024-07-16 00:06:02.201383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.422 [2024-07-16 00:06:02.201623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.422 [2024-07-16 00:06:02.201844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.422 [2024-07-16 00:06:02.201858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.422 [2024-07-16 00:06:02.201866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.422 [2024-07-16 00:06:02.205379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.422 [2024-07-16 00:06:02.214483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.422 [2024-07-16 00:06:02.215198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.422 [2024-07-16 00:06:02.215245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.422 [2024-07-16 00:06:02.215257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.422 [2024-07-16 00:06:02.215493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.422 [2024-07-16 00:06:02.215714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.422 [2024-07-16 00:06:02.215723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.422 [2024-07-16 00:06:02.215731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.422 [2024-07-16 00:06:02.219240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.422 [2024-07-16 00:06:02.228336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.422 [2024-07-16 00:06:02.228953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.422 [2024-07-16 00:06:02.228972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.422 [2024-07-16 00:06:02.228980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.422 [2024-07-16 00:06:02.229198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.422 [2024-07-16 00:06:02.229422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.422 [2024-07-16 00:06:02.229432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.422 [2024-07-16 00:06:02.229440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.422 [2024-07-16 00:06:02.232943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.422 [2024-07-16 00:06:02.242246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.422 [2024-07-16 00:06:02.242856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.422 [2024-07-16 00:06:02.242873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.423 [2024-07-16 00:06:02.242881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.423 [2024-07-16 00:06:02.243097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.423 [2024-07-16 00:06:02.243319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.423 [2024-07-16 00:06:02.243329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.423 [2024-07-16 00:06:02.243336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.423 [2024-07-16 00:06:02.246838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.423 [2024-07-16 00:06:02.256130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.423 [2024-07-16 00:06:02.256829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.423 [2024-07-16 00:06:02.256867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.423 [2024-07-16 00:06:02.256878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.423 [2024-07-16 00:06:02.257115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.423 [2024-07-16 00:06:02.257345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.423 [2024-07-16 00:06:02.257356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.423 [2024-07-16 00:06:02.257364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.423 [2024-07-16 00:06:02.260870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.423 [2024-07-16 00:06:02.269966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.423 [2024-07-16 00:06:02.270676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.423 [2024-07-16 00:06:02.270714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.423 [2024-07-16 00:06:02.270725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.423 [2024-07-16 00:06:02.270962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.423 [2024-07-16 00:06:02.271182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.423 [2024-07-16 00:06:02.271192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.423 [2024-07-16 00:06:02.271200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.423 [2024-07-16 00:06:02.274710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.423 [2024-07-16 00:06:02.283800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.423 [2024-07-16 00:06:02.284518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.423 [2024-07-16 00:06:02.284556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.423 [2024-07-16 00:06:02.284567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.423 [2024-07-16 00:06:02.284803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.423 [2024-07-16 00:06:02.285024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.423 [2024-07-16 00:06:02.285033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.423 [2024-07-16 00:06:02.285042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.423 [2024-07-16 00:06:02.288553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.423 [2024-07-16 00:06:02.297636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.423 [2024-07-16 00:06:02.298247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.423 [2024-07-16 00:06:02.298286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.423 [2024-07-16 00:06:02.298297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.423 [2024-07-16 00:06:02.298538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.423 [2024-07-16 00:06:02.298759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.423 [2024-07-16 00:06:02.298768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.423 [2024-07-16 00:06:02.298776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.423 [2024-07-16 00:06:02.302290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.423 [2024-07-16 00:06:02.311574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.423 [2024-07-16 00:06:02.312249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.423 [2024-07-16 00:06:02.312287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.423 [2024-07-16 00:06:02.312300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.423 [2024-07-16 00:06:02.312538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.423 [2024-07-16 00:06:02.312759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.423 [2024-07-16 00:06:02.312769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.423 [2024-07-16 00:06:02.312776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.423 [2024-07-16 00:06:02.316300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.423 [2024-07-16 00:06:02.325383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.423 [2024-07-16 00:06:02.326055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.423 [2024-07-16 00:06:02.326093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.423 [2024-07-16 00:06:02.326104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.423 [2024-07-16 00:06:02.326350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.423 [2024-07-16 00:06:02.326571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.423 [2024-07-16 00:06:02.326581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.423 [2024-07-16 00:06:02.326589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.423 [2024-07-16 00:06:02.330096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.423 [2024-07-16 00:06:02.339179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.423 [2024-07-16 00:06:02.339901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.423 [2024-07-16 00:06:02.339939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.423 [2024-07-16 00:06:02.339950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.423 [2024-07-16 00:06:02.340187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.423 [2024-07-16 00:06:02.340417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.423 [2024-07-16 00:06:02.340427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.423 [2024-07-16 00:06:02.340439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.423 [2024-07-16 00:06:02.343942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.423 [2024-07-16 00:06:02.353021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.423 [2024-07-16 00:06:02.353710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.424 [2024-07-16 00:06:02.353748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.424 [2024-07-16 00:06:02.353760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.424 [2024-07-16 00:06:02.353996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.424 [2024-07-16 00:06:02.354217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.424 [2024-07-16 00:06:02.354226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.424 [2024-07-16 00:06:02.354244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.424 [2024-07-16 00:06:02.357748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.424 [2024-07-16 00:06:02.366834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.424 [2024-07-16 00:06:02.367552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.424 [2024-07-16 00:06:02.367591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.424 [2024-07-16 00:06:02.367602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.424 [2024-07-16 00:06:02.367839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.424 [2024-07-16 00:06:02.368060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.424 [2024-07-16 00:06:02.368070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.424 [2024-07-16 00:06:02.368077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.424 [2024-07-16 00:06:02.371589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.424 [2024-07-16 00:06:02.380672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.424 [2024-07-16 00:06:02.381359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.424 [2024-07-16 00:06:02.381398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.424 [2024-07-16 00:06:02.381410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.424 [2024-07-16 00:06:02.381648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.424 [2024-07-16 00:06:02.381868] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.424 [2024-07-16 00:06:02.381879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.424 [2024-07-16 00:06:02.381887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.424 [2024-07-16 00:06:02.385401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.424 [2024-07-16 00:06:02.394487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.424 [2024-07-16 00:06:02.395059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.424 [2024-07-16 00:06:02.395085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.424 [2024-07-16 00:06:02.395093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.424 [2024-07-16 00:06:02.395317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.424 [2024-07-16 00:06:02.395535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.424 [2024-07-16 00:06:02.395544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.424 [2024-07-16 00:06:02.395551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.424 [2024-07-16 00:06:02.399052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.424 [2024-07-16 00:06:02.408342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.424 [2024-07-16 00:06:02.408793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.424 [2024-07-16 00:06:02.408811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.424 [2024-07-16 00:06:02.408819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.424 [2024-07-16 00:06:02.409036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.424 [2024-07-16 00:06:02.409260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.424 [2024-07-16 00:06:02.409271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.424 [2024-07-16 00:06:02.409279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.424 [2024-07-16 00:06:02.412775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.424 [2024-07-16 00:06:02.422280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.424 [2024-07-16 00:06:02.422983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.424 [2024-07-16 00:06:02.423021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.424 [2024-07-16 00:06:02.423032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.424 [2024-07-16 00:06:02.423278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.424 [2024-07-16 00:06:02.423499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.424 [2024-07-16 00:06:02.423509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.424 [2024-07-16 00:06:02.423516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.424 [2024-07-16 00:06:02.427018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.424 [2024-07-16 00:06:02.436109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.424 [2024-07-16 00:06:02.436803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.424 [2024-07-16 00:06:02.436842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.424 [2024-07-16 00:06:02.436853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.424 [2024-07-16 00:06:02.437090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.424 [2024-07-16 00:06:02.437322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.424 [2024-07-16 00:06:02.437333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.424 [2024-07-16 00:06:02.437340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.424 [2024-07-16 00:06:02.440843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.424 [2024-07-16 00:06:02.449926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.424 [2024-07-16 00:06:02.450508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.424 [2024-07-16 00:06:02.450528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.424 [2024-07-16 00:06:02.450536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.424 [2024-07-16 00:06:02.450753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.424 [2024-07-16 00:06:02.450971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.424 [2024-07-16 00:06:02.450980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.424 [2024-07-16 00:06:02.450987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.424 [2024-07-16 00:06:02.454489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.424 [2024-07-16 00:06:02.463792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.424 [2024-07-16 00:06:02.464543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.424 [2024-07-16 00:06:02.464586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.424 [2024-07-16 00:06:02.464598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.424 [2024-07-16 00:06:02.464842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.424 [2024-07-16 00:06:02.465063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.424 [2024-07-16 00:06:02.465072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.424 [2024-07-16 00:06:02.465080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.424 [2024-07-16 00:06:02.468589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.425 [2024-07-16 00:06:02.477674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.425 [2024-07-16 00:06:02.478673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.425 [2024-07-16 00:06:02.478698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.425 [2024-07-16 00:06:02.478706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.425 [2024-07-16 00:06:02.478930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.425 [2024-07-16 00:06:02.479148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.425 [2024-07-16 00:06:02.479158] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.425 [2024-07-16 00:06:02.479165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.425 [2024-07-16 00:06:02.482675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.425 [2024-07-16 00:06:02.491566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.425 [2024-07-16 00:06:02.492274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.425 [2024-07-16 00:06:02.492312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.425 [2024-07-16 00:06:02.492324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.425 [2024-07-16 00:06:02.492560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.425 [2024-07-16 00:06:02.492781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.425 [2024-07-16 00:06:02.492790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.425 [2024-07-16 00:06:02.492798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.425 [2024-07-16 00:06:02.496310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.425 [2024-07-16 00:06:02.505396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.425 [2024-07-16 00:06:02.506016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.425 [2024-07-16 00:06:02.506035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.425 [2024-07-16 00:06:02.506043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.425 [2024-07-16 00:06:02.506266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.425 [2024-07-16 00:06:02.506484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.425 [2024-07-16 00:06:02.506493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.425 [2024-07-16 00:06:02.506500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.425 [2024-07-16 00:06:02.509994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.425 [2024-07-16 00:06:02.519304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.425 [2024-07-16 00:06:02.519912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.425 [2024-07-16 00:06:02.519928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.425 [2024-07-16 00:06:02.519936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.425 [2024-07-16 00:06:02.520153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.425 [2024-07-16 00:06:02.520375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.425 [2024-07-16 00:06:02.520384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.425 [2024-07-16 00:06:02.520391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.425 [2024-07-16 00:06:02.523889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.425 [2024-07-16 00:06:02.533175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.425 [2024-07-16 00:06:02.533833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.425 [2024-07-16 00:06:02.533872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.425 [2024-07-16 00:06:02.533887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.425 [2024-07-16 00:06:02.534124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.425 [2024-07-16 00:06:02.534351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.425 [2024-07-16 00:06:02.534361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.425 [2024-07-16 00:06:02.534369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.425 [2024-07-16 00:06:02.537881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.425 [2024-07-16 00:06:02.546963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.425 [2024-07-16 00:06:02.547528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.425 [2024-07-16 00:06:02.547548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.425 [2024-07-16 00:06:02.547556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.425 [2024-07-16 00:06:02.547774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.425 [2024-07-16 00:06:02.547990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.425 [2024-07-16 00:06:02.547999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.425 [2024-07-16 00:06:02.548006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.425 [2024-07-16 00:06:02.551510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.425 [2024-07-16 00:06:02.560795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.425 [2024-07-16 00:06:02.561330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.425 [2024-07-16 00:06:02.561346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.425 [2024-07-16 00:06:02.561354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.425 [2024-07-16 00:06:02.561570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.425 [2024-07-16 00:06:02.561787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.425 [2024-07-16 00:06:02.561796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.425 [2024-07-16 00:06:02.561803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.425 [2024-07-16 00:06:02.565305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.425 [2024-07-16 00:06:02.574590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.425 [2024-07-16 00:06:02.575163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.425 [2024-07-16 00:06:02.575179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.425 [2024-07-16 00:06:02.575186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.425 [2024-07-16 00:06:02.575408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.425 [2024-07-16 00:06:02.575626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.425 [2024-07-16 00:06:02.575638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.425 [2024-07-16 00:06:02.575645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.426 [2024-07-16 00:06:02.579143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.426 [2024-07-16 00:06:02.588429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.426 [2024-07-16 00:06:02.588991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.426 [2024-07-16 00:06:02.589007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.426 [2024-07-16 00:06:02.589015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.426 [2024-07-16 00:06:02.589235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.426 [2024-07-16 00:06:02.589452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.426 [2024-07-16 00:06:02.589461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.426 [2024-07-16 00:06:02.589468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.426 [2024-07-16 00:06:02.592964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.688 [2024-07-16 00:06:02.602252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.688 [2024-07-16 00:06:02.602816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.688 [2024-07-16 00:06:02.602831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.688 [2024-07-16 00:06:02.602839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.688 [2024-07-16 00:06:02.603055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.688 [2024-07-16 00:06:02.603275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.688 [2024-07-16 00:06:02.603285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.688 [2024-07-16 00:06:02.603292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.688 [2024-07-16 00:06:02.606788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.688 [2024-07-16 00:06:02.616083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.688 [2024-07-16 00:06:02.616782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.688 [2024-07-16 00:06:02.616821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.688 [2024-07-16 00:06:02.616834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.688 [2024-07-16 00:06:02.617072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.688 [2024-07-16 00:06:02.617301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.688 [2024-07-16 00:06:02.617311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.688 [2024-07-16 00:06:02.617318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.688 [2024-07-16 00:06:02.620824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.688 [2024-07-16 00:06:02.629916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.688 [2024-07-16 00:06:02.630560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.688 [2024-07-16 00:06:02.630599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.688 [2024-07-16 00:06:02.630610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.688 [2024-07-16 00:06:02.630847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.688 [2024-07-16 00:06:02.631068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.688 [2024-07-16 00:06:02.631077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.688 [2024-07-16 00:06:02.631084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.688 [2024-07-16 00:06:02.634593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.688 [2024-07-16 00:06:02.643687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.688 [2024-07-16 00:06:02.644318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.688 [2024-07-16 00:06:02.644337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.688 [2024-07-16 00:06:02.644345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.688 [2024-07-16 00:06:02.644562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.688 [2024-07-16 00:06:02.644780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.688 [2024-07-16 00:06:02.644789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.688 [2024-07-16 00:06:02.644796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.688 [2024-07-16 00:06:02.648301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.688 [2024-07-16 00:06:02.657585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.688 [2024-07-16 00:06:02.658259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.688 [2024-07-16 00:06:02.658297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.688 [2024-07-16 00:06:02.658308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.688 [2024-07-16 00:06:02.658545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.689 [2024-07-16 00:06:02.658766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.689 [2024-07-16 00:06:02.658776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.689 [2024-07-16 00:06:02.658784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.689 [2024-07-16 00:06:02.662296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.689 [2024-07-16 00:06:02.671382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.689 [2024-07-16 00:06:02.671935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.689 [2024-07-16 00:06:02.671973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.689 [2024-07-16 00:06:02.671984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.689 [2024-07-16 00:06:02.672225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.689 [2024-07-16 00:06:02.672454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.689 [2024-07-16 00:06:02.672464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.689 [2024-07-16 00:06:02.672472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.689 [2024-07-16 00:06:02.675972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.689 [2024-07-16 00:06:02.685264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.689 [2024-07-16 00:06:02.685942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.689 [2024-07-16 00:06:02.685980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.689 [2024-07-16 00:06:02.685991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.689 [2024-07-16 00:06:02.686228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.689 [2024-07-16 00:06:02.686457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.689 [2024-07-16 00:06:02.686466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.689 [2024-07-16 00:06:02.686474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.689 [2024-07-16 00:06:02.689976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.689 [2024-07-16 00:06:02.699062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.689 [2024-07-16 00:06:02.699744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.689 [2024-07-16 00:06:02.699782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.689 [2024-07-16 00:06:02.699793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.689 [2024-07-16 00:06:02.700030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.689 [2024-07-16 00:06:02.700258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.689 [2024-07-16 00:06:02.700268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.689 [2024-07-16 00:06:02.700276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.689 [2024-07-16 00:06:02.703787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.689 [2024-07-16 00:06:02.712931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.689 [2024-07-16 00:06:02.713585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.689 [2024-07-16 00:06:02.713624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.689 [2024-07-16 00:06:02.713635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.689 [2024-07-16 00:06:02.713871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.689 [2024-07-16 00:06:02.714092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.689 [2024-07-16 00:06:02.714102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.689 [2024-07-16 00:06:02.714114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.689 [2024-07-16 00:06:02.717637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.689 [2024-07-16 00:06:02.726723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.689 [2024-07-16 00:06:02.727341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.689 [2024-07-16 00:06:02.727387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.689 [2024-07-16 00:06:02.727400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.689 [2024-07-16 00:06:02.727640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.689 [2024-07-16 00:06:02.727860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.689 [2024-07-16 00:06:02.727869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.689 [2024-07-16 00:06:02.727877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.689 [2024-07-16 00:06:02.731388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.689 [2024-07-16 00:06:02.740471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.689 [2024-07-16 00:06:02.741081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.689 [2024-07-16 00:06:02.741100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.689 [2024-07-16 00:06:02.741108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.689 [2024-07-16 00:06:02.741331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.689 [2024-07-16 00:06:02.741549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.689 [2024-07-16 00:06:02.741558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.689 [2024-07-16 00:06:02.741565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.689 [2024-07-16 00:06:02.745065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.689 [2024-07-16 00:06:02.754357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.689 [2024-07-16 00:06:02.755063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.689 [2024-07-16 00:06:02.755101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.689 [2024-07-16 00:06:02.755112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.689 [2024-07-16 00:06:02.755357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.689 [2024-07-16 00:06:02.755578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.689 [2024-07-16 00:06:02.755588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.689 [2024-07-16 00:06:02.755595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.689 [2024-07-16 00:06:02.759099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.689 [2024-07-16 00:06:02.768183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.689 [2024-07-16 00:06:02.768839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.689 [2024-07-16 00:06:02.768881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.689 [2024-07-16 00:06:02.768893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.689 [2024-07-16 00:06:02.769130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.689 [2024-07-16 00:06:02.769358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.689 [2024-07-16 00:06:02.769368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.689 [2024-07-16 00:06:02.769376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.689 [2024-07-16 00:06:02.772879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.689 [2024-07-16 00:06:02.781963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.689 [2024-07-16 00:06:02.782637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.690 [2024-07-16 00:06:02.782675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.690 [2024-07-16 00:06:02.782686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.690 [2024-07-16 00:06:02.782923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.690 [2024-07-16 00:06:02.783143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.690 [2024-07-16 00:06:02.783153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.690 [2024-07-16 00:06:02.783161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.690 [2024-07-16 00:06:02.786672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.690 [2024-07-16 00:06:02.795757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.690 [2024-07-16 00:06:02.796448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.690 [2024-07-16 00:06:02.796486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.690 [2024-07-16 00:06:02.796499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.690 [2024-07-16 00:06:02.796737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.690 [2024-07-16 00:06:02.796957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.690 [2024-07-16 00:06:02.796967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.690 [2024-07-16 00:06:02.796974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.690 [2024-07-16 00:06:02.800484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.690 [2024-07-16 00:06:02.809568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.690 [2024-07-16 00:06:02.810288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.690 [2024-07-16 00:06:02.810327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.690 [2024-07-16 00:06:02.810339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.690 [2024-07-16 00:06:02.810578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.690 [2024-07-16 00:06:02.810802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.690 [2024-07-16 00:06:02.810812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.690 [2024-07-16 00:06:02.810820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.690 [2024-07-16 00:06:02.814340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.690 [2024-07-16 00:06:02.823431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.690 [2024-07-16 00:06:02.824126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.690 [2024-07-16 00:06:02.824164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.690 [2024-07-16 00:06:02.824175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.690 [2024-07-16 00:06:02.824419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.690 [2024-07-16 00:06:02.824641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.690 [2024-07-16 00:06:02.824650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.690 [2024-07-16 00:06:02.824657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.690 [2024-07-16 00:06:02.828161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.690 [2024-07-16 00:06:02.837251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.690 [2024-07-16 00:06:02.837899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.690 [2024-07-16 00:06:02.837937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.690 [2024-07-16 00:06:02.837948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.690 [2024-07-16 00:06:02.838184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.690 [2024-07-16 00:06:02.838413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.690 [2024-07-16 00:06:02.838424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.690 [2024-07-16 00:06:02.838432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.690 [2024-07-16 00:06:02.841936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.690 [2024-07-16 00:06:02.851025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.690 [2024-07-16 00:06:02.851780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.690 [2024-07-16 00:06:02.851818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.690 [2024-07-16 00:06:02.851829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.690 [2024-07-16 00:06:02.852066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.690 [2024-07-16 00:06:02.852295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.690 [2024-07-16 00:06:02.852305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.690 [2024-07-16 00:06:02.852313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.690 [2024-07-16 00:06:02.855820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.690 [2024-07-16 00:06:02.864911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.690 [2024-07-16 00:06:02.865479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.690 [2024-07-16 00:06:02.865499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.690 [2024-07-16 00:06:02.865507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.690 [2024-07-16 00:06:02.865725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.690 [2024-07-16 00:06:02.865942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.690 [2024-07-16 00:06:02.865951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.690 [2024-07-16 00:06:02.865958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.690 [2024-07-16 00:06:02.869464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.953 [2024-07-16 00:06:02.878753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.953 [2024-07-16 00:06:02.879360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.953 [2024-07-16 00:06:02.879398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.954 [2024-07-16 00:06:02.879412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.954 [2024-07-16 00:06:02.879651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.954 [2024-07-16 00:06:02.879873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.954 [2024-07-16 00:06:02.879883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.954 [2024-07-16 00:06:02.879891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.954 [2024-07-16 00:06:02.883406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.954 [2024-07-16 00:06:02.892696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.954 [2024-07-16 00:06:02.893488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.954 [2024-07-16 00:06:02.893527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.954 [2024-07-16 00:06:02.893538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.954 [2024-07-16 00:06:02.893775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.954 [2024-07-16 00:06:02.893995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.954 [2024-07-16 00:06:02.894005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.954 [2024-07-16 00:06:02.894013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.954 [2024-07-16 00:06:02.897527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.954 [2024-07-16 00:06:02.906616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.954 [2024-07-16 00:06:02.907342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.954 [2024-07-16 00:06:02.907381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.954 [2024-07-16 00:06:02.907398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.954 [2024-07-16 00:06:02.907639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.954 [2024-07-16 00:06:02.907859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.954 [2024-07-16 00:06:02.907869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.954 [2024-07-16 00:06:02.907877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.954 [2024-07-16 00:06:02.911490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.954 [2024-07-16 00:06:02.920389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.954 [2024-07-16 00:06:02.921065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.954 [2024-07-16 00:06:02.921103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.954 [2024-07-16 00:06:02.921115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.954 [2024-07-16 00:06:02.921360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.954 [2024-07-16 00:06:02.921582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.954 [2024-07-16 00:06:02.921592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.954 [2024-07-16 00:06:02.921599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.954 [2024-07-16 00:06:02.925103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.954 [2024-07-16 00:06:02.934187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.954 [2024-07-16 00:06:02.934889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.954 [2024-07-16 00:06:02.934927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.954 [2024-07-16 00:06:02.934938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.954 [2024-07-16 00:06:02.935175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.954 [2024-07-16 00:06:02.935405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.954 [2024-07-16 00:06:02.935415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.954 [2024-07-16 00:06:02.935423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.954 [2024-07-16 00:06:02.938927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.954 [2024-07-16 00:06:02.948013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.954 [2024-07-16 00:06:02.948639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.954 [2024-07-16 00:06:02.948678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.954 [2024-07-16 00:06:02.948689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.954 [2024-07-16 00:06:02.948926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.954 [2024-07-16 00:06:02.949147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.954 [2024-07-16 00:06:02.949161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.954 [2024-07-16 00:06:02.949169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.954 [2024-07-16 00:06:02.952683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.954 [2024-07-16 00:06:02.961772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.954 [2024-07-16 00:06:02.962279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.954 [2024-07-16 00:06:02.962299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.954 [2024-07-16 00:06:02.962308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.954 [2024-07-16 00:06:02.962526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.954 [2024-07-16 00:06:02.962743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.954 [2024-07-16 00:06:02.962752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.954 [2024-07-16 00:06:02.962759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.954 [2024-07-16 00:06:02.966263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.954 [2024-07-16 00:06:02.975553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.954 [2024-07-16 00:06:02.976160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.954 [2024-07-16 00:06:02.976176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.954 [2024-07-16 00:06:02.976184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.954 [2024-07-16 00:06:02.976405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.954 [2024-07-16 00:06:02.976623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.954 [2024-07-16 00:06:02.976633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.954 [2024-07-16 00:06:02.976640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.954 [2024-07-16 00:06:02.980137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.954 [2024-07-16 00:06:02.989428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.954 [2024-07-16 00:06:02.989998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.954 [2024-07-16 00:06:02.990014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.954 [2024-07-16 00:06:02.990022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.954 [2024-07-16 00:06:02.990244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.954 [2024-07-16 00:06:02.990461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.954 [2024-07-16 00:06:02.990470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.954 [2024-07-16 00:06:02.990477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.954 [2024-07-16 00:06:02.993974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.955 [2024-07-16 00:06:03.003269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.955 [2024-07-16 00:06:03.003973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.955 [2024-07-16 00:06:03.004012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.955 [2024-07-16 00:06:03.004023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.955 [2024-07-16 00:06:03.004268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.955 [2024-07-16 00:06:03.004490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.955 [2024-07-16 00:06:03.004500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.955 [2024-07-16 00:06:03.004508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.955 [2024-07-16 00:06:03.008011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.955 [2024-07-16 00:06:03.017108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.955 [2024-07-16 00:06:03.017783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.955 [2024-07-16 00:06:03.017821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.955 [2024-07-16 00:06:03.017833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.955 [2024-07-16 00:06:03.018069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.955 [2024-07-16 00:06:03.018298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.955 [2024-07-16 00:06:03.018308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.955 [2024-07-16 00:06:03.018316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.955 [2024-07-16 00:06:03.021820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.955 [2024-07-16 00:06:03.030923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.955 [2024-07-16 00:06:03.031476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.955 [2024-07-16 00:06:03.031497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.955 [2024-07-16 00:06:03.031505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.955 [2024-07-16 00:06:03.031722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.955 [2024-07-16 00:06:03.031940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.955 [2024-07-16 00:06:03.031949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.955 [2024-07-16 00:06:03.031956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.955 [2024-07-16 00:06:03.035458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.955 [2024-07-16 00:06:03.044743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.955 [2024-07-16 00:06:03.045326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.955 [2024-07-16 00:06:03.045343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.955 [2024-07-16 00:06:03.045351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.955 [2024-07-16 00:06:03.045571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.955 [2024-07-16 00:06:03.045788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.955 [2024-07-16 00:06:03.045797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.955 [2024-07-16 00:06:03.045804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.955 [2024-07-16 00:06:03.049306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.955 [2024-07-16 00:06:03.058593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.955 [2024-07-16 00:06:03.059285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.955 [2024-07-16 00:06:03.059324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.955 [2024-07-16 00:06:03.059336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.955 [2024-07-16 00:06:03.059577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.955 [2024-07-16 00:06:03.059798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.955 [2024-07-16 00:06:03.059808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.955 [2024-07-16 00:06:03.059815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.955 [2024-07-16 00:06:03.063327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.955 [2024-07-16 00:06:03.072410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.955 [2024-07-16 00:06:03.073027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.955 [2024-07-16 00:06:03.073046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.955 [2024-07-16 00:06:03.073054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.955 [2024-07-16 00:06:03.073277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.955 [2024-07-16 00:06:03.073495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.955 [2024-07-16 00:06:03.073504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.955 [2024-07-16 00:06:03.073512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.955 [2024-07-16 00:06:03.077008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.955 [2024-07-16 00:06:03.086295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.955 [2024-07-16 00:06:03.086897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.955 [2024-07-16 00:06:03.086913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.955 [2024-07-16 00:06:03.086921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.955 [2024-07-16 00:06:03.087137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.955 [2024-07-16 00:06:03.087360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.955 [2024-07-16 00:06:03.087370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.955 [2024-07-16 00:06:03.087381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.955 [2024-07-16 00:06:03.091033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.955 [2024-07-16 00:06:03.100117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.955 [2024-07-16 00:06:03.100708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.955 [2024-07-16 00:06:03.100725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.955 [2024-07-16 00:06:03.100733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.955 [2024-07-16 00:06:03.100950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.955 [2024-07-16 00:06:03.101167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.955 [2024-07-16 00:06:03.101175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.955 [2024-07-16 00:06:03.101182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.955 [2024-07-16 00:06:03.104681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.955 [2024-07-16 00:06:03.113965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.955 [2024-07-16 00:06:03.114650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.955 [2024-07-16 00:06:03.114689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.955 [2024-07-16 00:06:03.114699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.955 [2024-07-16 00:06:03.114936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.956 [2024-07-16 00:06:03.115157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.956 [2024-07-16 00:06:03.115166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.956 [2024-07-16 00:06:03.115174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.956 [2024-07-16 00:06:03.118690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:47.956 [2024-07-16 00:06:03.127780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:47.956 [2024-07-16 00:06:03.128523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.956 [2024-07-16 00:06:03.128562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:47.956 [2024-07-16 00:06:03.128573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:47.956 [2024-07-16 00:06:03.128809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:47.956 [2024-07-16 00:06:03.129030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:47.956 [2024-07-16 00:06:03.129039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:47.956 [2024-07-16 00:06:03.129047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:47.956 [2024-07-16 00:06:03.132561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:47.956 [2024-07-16 00:06:03.141862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.222 [2024-07-16 00:06:03.142579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.222 [2024-07-16 00:06:03.142626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.222 [2024-07-16 00:06:03.142637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.222 [2024-07-16 00:06:03.142874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.222 [2024-07-16 00:06:03.143095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.222 [2024-07-16 00:06:03.143104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.222 [2024-07-16 00:06:03.143112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.222 [2024-07-16 00:06:03.146627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.222 [2024-07-16 00:06:03.155713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.222 [2024-07-16 00:06:03.156341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.222 [2024-07-16 00:06:03.156379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.222 [2024-07-16 00:06:03.156390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.222 [2024-07-16 00:06:03.156627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.222 [2024-07-16 00:06:03.156847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.222 [2024-07-16 00:06:03.156856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.222 [2024-07-16 00:06:03.156865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.222 [2024-07-16 00:06:03.160377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.222 [2024-07-16 00:06:03.169460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.222 [2024-07-16 00:06:03.170079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.222 [2024-07-16 00:06:03.170098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.222 [2024-07-16 00:06:03.170106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.222 [2024-07-16 00:06:03.170330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.222 [2024-07-16 00:06:03.170549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.222 [2024-07-16 00:06:03.170558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.222 [2024-07-16 00:06:03.170565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.222 [2024-07-16 00:06:03.174061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.222 [2024-07-16 00:06:03.183352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.222 [2024-07-16 00:06:03.183959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.222 [2024-07-16 00:06:03.183975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.222 [2024-07-16 00:06:03.183983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.222 [2024-07-16 00:06:03.184200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.222 [2024-07-16 00:06:03.184426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.222 [2024-07-16 00:06:03.184436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.222 [2024-07-16 00:06:03.184443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.222 [2024-07-16 00:06:03.187938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.222 [2024-07-16 00:06:03.197221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.222 [2024-07-16 00:06:03.197819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.222 [2024-07-16 00:06:03.197836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.222 [2024-07-16 00:06:03.197843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.222 [2024-07-16 00:06:03.198060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.222 [2024-07-16 00:06:03.198282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.222 [2024-07-16 00:06:03.198291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.222 [2024-07-16 00:06:03.198298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.222 [2024-07-16 00:06:03.201794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.222 [2024-07-16 00:06:03.211075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.222 [2024-07-16 00:06:03.211769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.222 [2024-07-16 00:06:03.211807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.222 [2024-07-16 00:06:03.211818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.222 [2024-07-16 00:06:03.212055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.222 [2024-07-16 00:06:03.212286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.222 [2024-07-16 00:06:03.212295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.222 [2024-07-16 00:06:03.212303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.222 [2024-07-16 00:06:03.215816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.222 [2024-07-16 00:06:03.224940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.222 [2024-07-16 00:06:03.225634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.222 [2024-07-16 00:06:03.225673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.222 [2024-07-16 00:06:03.225684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.222 [2024-07-16 00:06:03.225920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.222 [2024-07-16 00:06:03.226142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.222 [2024-07-16 00:06:03.226151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.222 [2024-07-16 00:06:03.226158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.222 [2024-07-16 00:06:03.229674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.222 [2024-07-16 00:06:03.238758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.222 [2024-07-16 00:06:03.239495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.222 [2024-07-16 00:06:03.239533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.222 [2024-07-16 00:06:03.239544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.222 [2024-07-16 00:06:03.239781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.222 [2024-07-16 00:06:03.240002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.222 [2024-07-16 00:06:03.240011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.222 [2024-07-16 00:06:03.240019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.222 [2024-07-16 00:06:03.243530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.222 [2024-07-16 00:06:03.252610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.222 [2024-07-16 00:06:03.253277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.222 [2024-07-16 00:06:03.253316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.222 [2024-07-16 00:06:03.253327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.222 [2024-07-16 00:06:03.253563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.222 [2024-07-16 00:06:03.253784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.222 [2024-07-16 00:06:03.253793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.222 [2024-07-16 00:06:03.253800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.222 [2024-07-16 00:06:03.257314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.222 [2024-07-16 00:06:03.266398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.222 [2024-07-16 00:06:03.267063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.222 [2024-07-16 00:06:03.267100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.223 [2024-07-16 00:06:03.267111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.223 [2024-07-16 00:06:03.267356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.223 [2024-07-16 00:06:03.267578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.223 [2024-07-16 00:06:03.267588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.223 [2024-07-16 00:06:03.267595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.223 [2024-07-16 00:06:03.271099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.223 [2024-07-16 00:06:03.280183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.223 [2024-07-16 00:06:03.280854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.223 [2024-07-16 00:06:03.280892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.223 [2024-07-16 00:06:03.280908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.223 [2024-07-16 00:06:03.281144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.223 [2024-07-16 00:06:03.281373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.223 [2024-07-16 00:06:03.281383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.223 [2024-07-16 00:06:03.281391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.223 [2024-07-16 00:06:03.284895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.223 [2024-07-16 00:06:03.293977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.223 [2024-07-16 00:06:03.294658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.223 [2024-07-16 00:06:03.294697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.223 [2024-07-16 00:06:03.294708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.223 [2024-07-16 00:06:03.294944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.223 [2024-07-16 00:06:03.295165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.223 [2024-07-16 00:06:03.295174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.223 [2024-07-16 00:06:03.295182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.223 [2024-07-16 00:06:03.298693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.223 [2024-07-16 00:06:03.307776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.223 [2024-07-16 00:06:03.308528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.223 [2024-07-16 00:06:03.308566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.223 [2024-07-16 00:06:03.308577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.223 [2024-07-16 00:06:03.308814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.223 [2024-07-16 00:06:03.309035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.223 [2024-07-16 00:06:03.309044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.223 [2024-07-16 00:06:03.309051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.223 [2024-07-16 00:06:03.312563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.223 [2024-07-16 00:06:03.321656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.223 [2024-07-16 00:06:03.322354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.223 [2024-07-16 00:06:03.322393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.223 [2024-07-16 00:06:03.322406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.223 [2024-07-16 00:06:03.322644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.223 [2024-07-16 00:06:03.322864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.223 [2024-07-16 00:06:03.322878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.223 [2024-07-16 00:06:03.322885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.223 [2024-07-16 00:06:03.326397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.223 [2024-07-16 00:06:03.335479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.223 [2024-07-16 00:06:03.336169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.223 [2024-07-16 00:06:03.336207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.223 [2024-07-16 00:06:03.336218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.223 [2024-07-16 00:06:03.336463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.223 [2024-07-16 00:06:03.336684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.223 [2024-07-16 00:06:03.336694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.223 [2024-07-16 00:06:03.336702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.223 [2024-07-16 00:06:03.340203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.223 [2024-07-16 00:06:03.349289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.223 [2024-07-16 00:06:03.349979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.223 [2024-07-16 00:06:03.350018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.223 [2024-07-16 00:06:03.350029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.223 [2024-07-16 00:06:03.350275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.223 [2024-07-16 00:06:03.350497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.223 [2024-07-16 00:06:03.350506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.223 [2024-07-16 00:06:03.350513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.223 [2024-07-16 00:06:03.354018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.223 [2024-07-16 00:06:03.363103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.223 [2024-07-16 00:06:03.363775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.223 [2024-07-16 00:06:03.363814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.223 [2024-07-16 00:06:03.363825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.223 [2024-07-16 00:06:03.364061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.223 [2024-07-16 00:06:03.364291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.223 [2024-07-16 00:06:03.364301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.223 [2024-07-16 00:06:03.364309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.223 [2024-07-16 00:06:03.367810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.223 [2024-07-16 00:06:03.376898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.223 [2024-07-16 00:06:03.377552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.223 [2024-07-16 00:06:03.377591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.223 [2024-07-16 00:06:03.377602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.223 [2024-07-16 00:06:03.377838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.223 [2024-07-16 00:06:03.378058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.223 [2024-07-16 00:06:03.378068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.223 [2024-07-16 00:06:03.378075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.223 [2024-07-16 00:06:03.381588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.223 [2024-07-16 00:06:03.390671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.223 [2024-07-16 00:06:03.391328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.223 [2024-07-16 00:06:03.391366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.223 [2024-07-16 00:06:03.391379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.223 [2024-07-16 00:06:03.391617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.223 [2024-07-16 00:06:03.391838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.223 [2024-07-16 00:06:03.391847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.223 [2024-07-16 00:06:03.391855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.223 [2024-07-16 00:06:03.395367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.223 [2024-07-16 00:06:03.404455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.223 [2024-07-16 00:06:03.405161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.223 [2024-07-16 00:06:03.405199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.223 [2024-07-16 00:06:03.405211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.223 [2024-07-16 00:06:03.405458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.223 [2024-07-16 00:06:03.405680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.223 [2024-07-16 00:06:03.405690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.223 [2024-07-16 00:06:03.405698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.484 [2024-07-16 00:06:03.409200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.484 [2024-07-16 00:06:03.418302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.484 [2024-07-16 00:06:03.419016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.484 [2024-07-16 00:06:03.419055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.484 [2024-07-16 00:06:03.419066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.484 [2024-07-16 00:06:03.419316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.484 [2024-07-16 00:06:03.419538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.484 [2024-07-16 00:06:03.419547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.484 [2024-07-16 00:06:03.419555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.484 [2024-07-16 00:06:03.423059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.484 [2024-07-16 00:06:03.432139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.484 [2024-07-16 00:06:03.432792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.484 [2024-07-16 00:06:03.432830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.484 [2024-07-16 00:06:03.432841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.484 [2024-07-16 00:06:03.433078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.484 [2024-07-16 00:06:03.433308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.484 [2024-07-16 00:06:03.433318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.484 [2024-07-16 00:06:03.433326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.485 [2024-07-16 00:06:03.436828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.485 [2024-07-16 00:06:03.445911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.485 [2024-07-16 00:06:03.446590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.485 [2024-07-16 00:06:03.446628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.485 [2024-07-16 00:06:03.446639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.485 [2024-07-16 00:06:03.446876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.485 [2024-07-16 00:06:03.447096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.485 [2024-07-16 00:06:03.447105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.485 [2024-07-16 00:06:03.447113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.485 [2024-07-16 00:06:03.450625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.485 [2024-07-16 00:06:03.459707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.485 [2024-07-16 00:06:03.460363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.485 [2024-07-16 00:06:03.460401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.485 [2024-07-16 00:06:03.460412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.485 [2024-07-16 00:06:03.460649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.485 [2024-07-16 00:06:03.460870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.485 [2024-07-16 00:06:03.460879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.485 [2024-07-16 00:06:03.460891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.485 [2024-07-16 00:06:03.464404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.485 [2024-07-16 00:06:03.473489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.485 [2024-07-16 00:06:03.474199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.485 [2024-07-16 00:06:03.474243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.485 [2024-07-16 00:06:03.474257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.485 [2024-07-16 00:06:03.474494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.485 [2024-07-16 00:06:03.474715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.485 [2024-07-16 00:06:03.474724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.485 [2024-07-16 00:06:03.474732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.485 [2024-07-16 00:06:03.478241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.485 [2024-07-16 00:06:03.487327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.485 [2024-07-16 00:06:03.488031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.485 [2024-07-16 00:06:03.488070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.485 [2024-07-16 00:06:03.488081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.485 [2024-07-16 00:06:03.488327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.485 [2024-07-16 00:06:03.488549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.485 [2024-07-16 00:06:03.488558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.485 [2024-07-16 00:06:03.488566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.485 [2024-07-16 00:06:03.492067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.485 [2024-07-16 00:06:03.501148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.485 [2024-07-16 00:06:03.501893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.485 [2024-07-16 00:06:03.501931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.485 [2024-07-16 00:06:03.501942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.485 [2024-07-16 00:06:03.502179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.485 [2024-07-16 00:06:03.502408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.485 [2024-07-16 00:06:03.502419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.485 [2024-07-16 00:06:03.502427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.485 [2024-07-16 00:06:03.505928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.485 [2024-07-16 00:06:03.515009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.485 [2024-07-16 00:06:03.515712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.485 [2024-07-16 00:06:03.515754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.485 [2024-07-16 00:06:03.515765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.485 [2024-07-16 00:06:03.516002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.485 [2024-07-16 00:06:03.516223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.485 [2024-07-16 00:06:03.516242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.485 [2024-07-16 00:06:03.516251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.485 [2024-07-16 00:06:03.519754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.485 [2024-07-16 00:06:03.528842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.485 [2024-07-16 00:06:03.529539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.485 [2024-07-16 00:06:03.529577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.485 [2024-07-16 00:06:03.529588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.485 [2024-07-16 00:06:03.529824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.485 [2024-07-16 00:06:03.530045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.485 [2024-07-16 00:06:03.530054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.485 [2024-07-16 00:06:03.530062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.485 [2024-07-16 00:06:03.533574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.485 [2024-07-16 00:06:03.542657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.485 [2024-07-16 00:06:03.543328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.485 [2024-07-16 00:06:03.543366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.485 [2024-07-16 00:06:03.543379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.485 [2024-07-16 00:06:03.543619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.485 [2024-07-16 00:06:03.543840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.485 [2024-07-16 00:06:03.543849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.485 [2024-07-16 00:06:03.543857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.485 [2024-07-16 00:06:03.547370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.485 [2024-07-16 00:06:03.556451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.485 [2024-07-16 00:06:03.557156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.485 [2024-07-16 00:06:03.557194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.485 [2024-07-16 00:06:03.557206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.485 [2024-07-16 00:06:03.557454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.486 [2024-07-16 00:06:03.557680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.486 [2024-07-16 00:06:03.557690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.486 [2024-07-16 00:06:03.557698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.486 [2024-07-16 00:06:03.561200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.486 [2024-07-16 00:06:03.570294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.486 [2024-07-16 00:06:03.570962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.486 [2024-07-16 00:06:03.571000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.486 [2024-07-16 00:06:03.571011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.486 [2024-07-16 00:06:03.571257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.486 [2024-07-16 00:06:03.571478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.486 [2024-07-16 00:06:03.571488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.486 [2024-07-16 00:06:03.571496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.486 [2024-07-16 00:06:03.575001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.486 [2024-07-16 00:06:03.584088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.486 [2024-07-16 00:06:03.584798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.486 [2024-07-16 00:06:03.584836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.486 [2024-07-16 00:06:03.584847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.486 [2024-07-16 00:06:03.585084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.486 [2024-07-16 00:06:03.585314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.486 [2024-07-16 00:06:03.585324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.486 [2024-07-16 00:06:03.585332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.486 [2024-07-16 00:06:03.588835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.486 [2024-07-16 00:06:03.597919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.486 [2024-07-16 00:06:03.598591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.486 [2024-07-16 00:06:03.598630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.486 [2024-07-16 00:06:03.598641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.486 [2024-07-16 00:06:03.598877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.486 [2024-07-16 00:06:03.599098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.486 [2024-07-16 00:06:03.599107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.486 [2024-07-16 00:06:03.599115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.486 [2024-07-16 00:06:03.602630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.486 [2024-07-16 00:06:03.611710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.486 [2024-07-16 00:06:03.612345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.486 [2024-07-16 00:06:03.612384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.486 [2024-07-16 00:06:03.612396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.486 [2024-07-16 00:06:03.612632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.486 [2024-07-16 00:06:03.612853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.486 [2024-07-16 00:06:03.612862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.486 [2024-07-16 00:06:03.612870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.486 [2024-07-16 00:06:03.616392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.486 [2024-07-16 00:06:03.625477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.486 [2024-07-16 00:06:03.626124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.486 [2024-07-16 00:06:03.626162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.486 [2024-07-16 00:06:03.626173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.486 [2024-07-16 00:06:03.626419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.486 [2024-07-16 00:06:03.626640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.486 [2024-07-16 00:06:03.626649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.486 [2024-07-16 00:06:03.626657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.486 [2024-07-16 00:06:03.630159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.486 [2024-07-16 00:06:03.639244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.486 [2024-07-16 00:06:03.639951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.486 [2024-07-16 00:06:03.639989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.486 [2024-07-16 00:06:03.640000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.486 [2024-07-16 00:06:03.640245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.486 [2024-07-16 00:06:03.640467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.486 [2024-07-16 00:06:03.640476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.486 [2024-07-16 00:06:03.640484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.486 [2024-07-16 00:06:03.643984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.486 [2024-07-16 00:06:03.653067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.486 [2024-07-16 00:06:03.653741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.486 [2024-07-16 00:06:03.653780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.486 [2024-07-16 00:06:03.653795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.486 [2024-07-16 00:06:03.654032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.486 [2024-07-16 00:06:03.654261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.486 [2024-07-16 00:06:03.654271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.486 [2024-07-16 00:06:03.654278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.486 [2024-07-16 00:06:03.657786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.486 [2024-07-16 00:06:03.666868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.486 [2024-07-16 00:06:03.667558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.486 [2024-07-16 00:06:03.667597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.486 [2024-07-16 00:06:03.667607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.486 [2024-07-16 00:06:03.667844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.486 [2024-07-16 00:06:03.668065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.486 [2024-07-16 00:06:03.668074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.486 [2024-07-16 00:06:03.668082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.486 [2024-07-16 00:06:03.671593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.748 [2024-07-16 00:06:03.680681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.748 [2024-07-16 00:06:03.681487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.748 [2024-07-16 00:06:03.681526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.748 [2024-07-16 00:06:03.681538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.748 [2024-07-16 00:06:03.681774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.748 [2024-07-16 00:06:03.681995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.748 [2024-07-16 00:06:03.682004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.748 [2024-07-16 00:06:03.682012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.748 [2024-07-16 00:06:03.685525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.748 [2024-07-16 00:06:03.694607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.748 [2024-07-16 00:06:03.695306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.748 [2024-07-16 00:06:03.695344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.748 [2024-07-16 00:06:03.695358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.748 [2024-07-16 00:06:03.695595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.748 [2024-07-16 00:06:03.695816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.748 [2024-07-16 00:06:03.695830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.748 [2024-07-16 00:06:03.695838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.748 [2024-07-16 00:06:03.699351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.748 [2024-07-16 00:06:03.708433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.749 [2024-07-16 00:06:03.709128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.749 [2024-07-16 00:06:03.709166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.749 [2024-07-16 00:06:03.709177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.749 [2024-07-16 00:06:03.709422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.749 [2024-07-16 00:06:03.709644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.749 [2024-07-16 00:06:03.709653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.749 [2024-07-16 00:06:03.709661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.749 [2024-07-16 00:06:03.713163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.749 [2024-07-16 00:06:03.722259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.749 [2024-07-16 00:06:03.722967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.749 [2024-07-16 00:06:03.723006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.749 [2024-07-16 00:06:03.723017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.749 [2024-07-16 00:06:03.723262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.749 [2024-07-16 00:06:03.723483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.749 [2024-07-16 00:06:03.723492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.749 [2024-07-16 00:06:03.723500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.749 [2024-07-16 00:06:03.727002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.749 [2024-07-16 00:06:03.736087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.749 [2024-07-16 00:06:03.736775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.749 [2024-07-16 00:06:03.736813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.749 [2024-07-16 00:06:03.736824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.749 [2024-07-16 00:06:03.737060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.749 [2024-07-16 00:06:03.737291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.749 [2024-07-16 00:06:03.737301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.749 [2024-07-16 00:06:03.737309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.749 [2024-07-16 00:06:03.740812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.749 [2024-07-16 00:06:03.749902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.749 [2024-07-16 00:06:03.750580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.749 [2024-07-16 00:06:03.750619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.749 [2024-07-16 00:06:03.750630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.749 [2024-07-16 00:06:03.750867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.749 [2024-07-16 00:06:03.751087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.749 [2024-07-16 00:06:03.751097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.749 [2024-07-16 00:06:03.751104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.749 [2024-07-16 00:06:03.754618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.749 [2024-07-16 00:06:03.763700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.749 [2024-07-16 00:06:03.764335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.749 [2024-07-16 00:06:03.764373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.749 [2024-07-16 00:06:03.764385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.749 [2024-07-16 00:06:03.764622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.749 [2024-07-16 00:06:03.764843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.749 [2024-07-16 00:06:03.764852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.749 [2024-07-16 00:06:03.764860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.749 [2024-07-16 00:06:03.768373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.749 [2024-07-16 00:06:03.777463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.749 [2024-07-16 00:06:03.778173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.749 [2024-07-16 00:06:03.778211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.749 [2024-07-16 00:06:03.778222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.749 [2024-07-16 00:06:03.778467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.749 [2024-07-16 00:06:03.778688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.749 [2024-07-16 00:06:03.778698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.749 [2024-07-16 00:06:03.778705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.749 [2024-07-16 00:06:03.782209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.749 [2024-07-16 00:06:03.791299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.749 [2024-07-16 00:06:03.792005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.749 [2024-07-16 00:06:03.792044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.749 [2024-07-16 00:06:03.792055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.749 [2024-07-16 00:06:03.792308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.749 [2024-07-16 00:06:03.792530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.749 [2024-07-16 00:06:03.792539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.749 [2024-07-16 00:06:03.792546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.749 [2024-07-16 00:06:03.796054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.749 [2024-07-16 00:06:03.805143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.749 [2024-07-16 00:06:03.805856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.749 [2024-07-16 00:06:03.805895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.749 [2024-07-16 00:06:03.805906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.749 [2024-07-16 00:06:03.806143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.749 [2024-07-16 00:06:03.806372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.749 [2024-07-16 00:06:03.806382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.749 [2024-07-16 00:06:03.806390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.749 [2024-07-16 00:06:03.809900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.749 [2024-07-16 00:06:03.818997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.749 [2024-07-16 00:06:03.819720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.749 [2024-07-16 00:06:03.819758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.749 [2024-07-16 00:06:03.819769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.749 [2024-07-16 00:06:03.820006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.749 [2024-07-16 00:06:03.820227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.749 [2024-07-16 00:06:03.820247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.750 [2024-07-16 00:06:03.820255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.750 [2024-07-16 00:06:03.823760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.750 [2024-07-16 00:06:03.832848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.750 [2024-07-16 00:06:03.833507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.750 [2024-07-16 00:06:03.833545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.750 [2024-07-16 00:06:03.833556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.750 [2024-07-16 00:06:03.833793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.750 [2024-07-16 00:06:03.834014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.750 [2024-07-16 00:06:03.834023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.750 [2024-07-16 00:06:03.834038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.750 [2024-07-16 00:06:03.837555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.750 [2024-07-16 00:06:03.846643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.750 [2024-07-16 00:06:03.847349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.750 [2024-07-16 00:06:03.847387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.750 [2024-07-16 00:06:03.847398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.750 [2024-07-16 00:06:03.847635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.750 [2024-07-16 00:06:03.847856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.750 [2024-07-16 00:06:03.847865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.750 [2024-07-16 00:06:03.847873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.750 [2024-07-16 00:06:03.851386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.750 [2024-07-16 00:06:03.860473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.750 [2024-07-16 00:06:03.861126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.750 [2024-07-16 00:06:03.861164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.750 [2024-07-16 00:06:03.861175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.750 [2024-07-16 00:06:03.861423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.750 [2024-07-16 00:06:03.861646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.750 [2024-07-16 00:06:03.861655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.750 [2024-07-16 00:06:03.861662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.750 [2024-07-16 00:06:03.865165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.750 [2024-07-16 00:06:03.874252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.750 [2024-07-16 00:06:03.874970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.750 [2024-07-16 00:06:03.875009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.750 [2024-07-16 00:06:03.875019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.750 [2024-07-16 00:06:03.875267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.750 [2024-07-16 00:06:03.875489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.750 [2024-07-16 00:06:03.875498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.750 [2024-07-16 00:06:03.875506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.750 [2024-07-16 00:06:03.879009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.750 [2024-07-16 00:06:03.888103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.750 [2024-07-16 00:06:03.888770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.750 [2024-07-16 00:06:03.888812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.750 [2024-07-16 00:06:03.888824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.750 [2024-07-16 00:06:03.889060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.750 [2024-07-16 00:06:03.889293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.750 [2024-07-16 00:06:03.889304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.750 [2024-07-16 00:06:03.889312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.750 [2024-07-16 00:06:03.892814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.750 [2024-07-16 00:06:03.901905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.750 [2024-07-16 00:06:03.902619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.750 [2024-07-16 00:06:03.902658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.750 [2024-07-16 00:06:03.902669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.750 [2024-07-16 00:06:03.902906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.750 [2024-07-16 00:06:03.903127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.750 [2024-07-16 00:06:03.903136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.750 [2024-07-16 00:06:03.903144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.750 [2024-07-16 00:06:03.906656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.750 [2024-07-16 00:06:03.915756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.750 [2024-07-16 00:06:03.916481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.750 [2024-07-16 00:06:03.916519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.750 [2024-07-16 00:06:03.916530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.750 [2024-07-16 00:06:03.916767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.750 [2024-07-16 00:06:03.916988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.750 [2024-07-16 00:06:03.916997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.750 [2024-07-16 00:06:03.917005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.750 [2024-07-16 00:06:03.920513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.750 [2024-07-16 00:06:03.929590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.750 [2024-07-16 00:06:03.930305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.750 [2024-07-16 00:06:03.930343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:48.750 [2024-07-16 00:06:03.930356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:48.750 [2024-07-16 00:06:03.930594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:48.750 [2024-07-16 00:06:03.930819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.750 [2024-07-16 00:06:03.930829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.750 [2024-07-16 00:06:03.930837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.750 [2024-07-16 00:06:03.934350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.012 [2024-07-16 00:06:03.943436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.012 [2024-07-16 00:06:03.944147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.012 [2024-07-16 00:06:03.944185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.012 [2024-07-16 00:06:03.944197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.012 [2024-07-16 00:06:03.944446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.012 [2024-07-16 00:06:03.944667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.012 [2024-07-16 00:06:03.944676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.012 [2024-07-16 00:06:03.944684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.012 [2024-07-16 00:06:03.948275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.012 [2024-07-16 00:06:03.957368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.012 [2024-07-16 00:06:03.957828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.012 [2024-07-16 00:06:03.957849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.012 [2024-07-16 00:06:03.957857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.012 [2024-07-16 00:06:03.958075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.012 [2024-07-16 00:06:03.958300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.012 [2024-07-16 00:06:03.958310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.012 [2024-07-16 00:06:03.958317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.012 [2024-07-16 00:06:03.961819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.012 [2024-07-16 00:06:03.971117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.012 [2024-07-16 00:06:03.971735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.012 [2024-07-16 00:06:03.971752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.012 [2024-07-16 00:06:03.971760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.012 [2024-07-16 00:06:03.971976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.012 [2024-07-16 00:06:03.972193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.012 [2024-07-16 00:06:03.972202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.012 [2024-07-16 00:06:03.972209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.012 [2024-07-16 00:06:03.975721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.012 [2024-07-16 00:06:03.985018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.012 [2024-07-16 00:06:03.985602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.012 [2024-07-16 00:06:03.985619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.012 [2024-07-16 00:06:03.985627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.012 [2024-07-16 00:06:03.985844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.012 [2024-07-16 00:06:03.986061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.013 [2024-07-16 00:06:03.986071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.013 [2024-07-16 00:06:03.986078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.013 [2024-07-16 00:06:03.989586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.013 [2024-07-16 00:06:03.998891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.013 [2024-07-16 00:06:03.999466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.013 [2024-07-16 00:06:03.999484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.013 [2024-07-16 00:06:03.999491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.013 [2024-07-16 00:06:03.999709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.013 [2024-07-16 00:06:03.999926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.013 [2024-07-16 00:06:03.999935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.013 [2024-07-16 00:06:03.999942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.013 [2024-07-16 00:06:04.003447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.013 [2024-07-16 00:06:04.012741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.013 [2024-07-16 00:06:04.013341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.013 [2024-07-16 00:06:04.013358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.013 [2024-07-16 00:06:04.013365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.013 [2024-07-16 00:06:04.013582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.013 [2024-07-16 00:06:04.013798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.013 [2024-07-16 00:06:04.013807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.013 [2024-07-16 00:06:04.013814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.013 [2024-07-16 00:06:04.017327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.013 [2024-07-16 00:06:04.026621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.013 [2024-07-16 00:06:04.027272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.013 [2024-07-16 00:06:04.027310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.013 [2024-07-16 00:06:04.027327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.013 [2024-07-16 00:06:04.027567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.013 [2024-07-16 00:06:04.027788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.013 [2024-07-16 00:06:04.027797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.013 [2024-07-16 00:06:04.027805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.013 [2024-07-16 00:06:04.031324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.013 [2024-07-16 00:06:04.040422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.013 [2024-07-16 00:06:04.041146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.013 [2024-07-16 00:06:04.041184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.013 [2024-07-16 00:06:04.041195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.013 [2024-07-16 00:06:04.041442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.013 [2024-07-16 00:06:04.041663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.013 [2024-07-16 00:06:04.041673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.013 [2024-07-16 00:06:04.041680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.013 [2024-07-16 00:06:04.045185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.013 [2024-07-16 00:06:04.054322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.013 [2024-07-16 00:06:04.054978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.013 [2024-07-16 00:06:04.055017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.013 [2024-07-16 00:06:04.055028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.013 [2024-07-16 00:06:04.055273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.013 [2024-07-16 00:06:04.055495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.013 [2024-07-16 00:06:04.055504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.013 [2024-07-16 00:06:04.055512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.013 [2024-07-16 00:06:04.059015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.013 [2024-07-16 00:06:04.068111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.013 [2024-07-16 00:06:04.068784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.013 [2024-07-16 00:06:04.068823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.013 [2024-07-16 00:06:04.068834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.013 [2024-07-16 00:06:04.069070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.013 [2024-07-16 00:06:04.069299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.013 [2024-07-16 00:06:04.069314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.013 [2024-07-16 00:06:04.069322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.013 [2024-07-16 00:06:04.072831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.013 [2024-07-16 00:06:04.081923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.013 [2024-07-16 00:06:04.082642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.013 [2024-07-16 00:06:04.082680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.013 [2024-07-16 00:06:04.082692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.013 [2024-07-16 00:06:04.082929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.013 [2024-07-16 00:06:04.083149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.013 [2024-07-16 00:06:04.083159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.013 [2024-07-16 00:06:04.083166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.013 [2024-07-16 00:06:04.086678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.013 [2024-07-16 00:06:04.095754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.013 [2024-07-16 00:06:04.096413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.013 [2024-07-16 00:06:04.096451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.013 [2024-07-16 00:06:04.096462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.013 [2024-07-16 00:06:04.096699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.013 [2024-07-16 00:06:04.096920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.013 [2024-07-16 00:06:04.096930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.013 [2024-07-16 00:06:04.096937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.013 [2024-07-16 00:06:04.100451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.013 [2024-07-16 00:06:04.109540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.013 [2024-07-16 00:06:04.110116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.014 [2024-07-16 00:06:04.110135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.014 [2024-07-16 00:06:04.110143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.014 [2024-07-16 00:06:04.110367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.014 [2024-07-16 00:06:04.110585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.014 [2024-07-16 00:06:04.110594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.014 [2024-07-16 00:06:04.110601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.014 [2024-07-16 00:06:04.114098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.014 [2024-07-16 00:06:04.123408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.014 [2024-07-16 00:06:04.124016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.014 [2024-07-16 00:06:04.124033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.014 [2024-07-16 00:06:04.124041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.014 [2024-07-16 00:06:04.124263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.014 [2024-07-16 00:06:04.124481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.014 [2024-07-16 00:06:04.124490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.014 [2024-07-16 00:06:04.124497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.014 [2024-07-16 00:06:04.127996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.014 [2024-07-16 00:06:04.137432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.014 [2024-07-16 00:06:04.138042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.014 [2024-07-16 00:06:04.138059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.014 [2024-07-16 00:06:04.138067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.014 [2024-07-16 00:06:04.138291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.014 [2024-07-16 00:06:04.138510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.014 [2024-07-16 00:06:04.138519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.014 [2024-07-16 00:06:04.138526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.014 [2024-07-16 00:06:04.142028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.014 [2024-07-16 00:06:04.151322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.014 [2024-07-16 00:06:04.151893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.014 [2024-07-16 00:06:04.151910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.014 [2024-07-16 00:06:04.151919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.014 [2024-07-16 00:06:04.152136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.014 [2024-07-16 00:06:04.152358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.014 [2024-07-16 00:06:04.152367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.014 [2024-07-16 00:06:04.152374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.014 [2024-07-16 00:06:04.155871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.014 [2024-07-16 00:06:04.165166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.014 [2024-07-16 00:06:04.165770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.014 [2024-07-16 00:06:04.165786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.014 [2024-07-16 00:06:04.165794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.014 [2024-07-16 00:06:04.166014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.014 [2024-07-16 00:06:04.166237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.014 [2024-07-16 00:06:04.166246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.014 [2024-07-16 00:06:04.166253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.014 [2024-07-16 00:06:04.169756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.014 [2024-07-16 00:06:04.179057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.014 [2024-07-16 00:06:04.179758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.014 [2024-07-16 00:06:04.179797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.014 [2024-07-16 00:06:04.179808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.014 [2024-07-16 00:06:04.180045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.014 [2024-07-16 00:06:04.180275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.014 [2024-07-16 00:06:04.180286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.014 [2024-07-16 00:06:04.180294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.014 [2024-07-16 00:06:04.183801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.014 [2024-07-16 00:06:04.192897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.014 [2024-07-16 00:06:04.193464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.014 [2024-07-16 00:06:04.193484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.014 [2024-07-16 00:06:04.193492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.014 [2024-07-16 00:06:04.193709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.014 [2024-07-16 00:06:04.193926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.014 [2024-07-16 00:06:04.193934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.014 [2024-07-16 00:06:04.193942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.014 [2024-07-16 00:06:04.197451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.276 [2024-07-16 00:06:04.206771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.276 [2024-07-16 00:06:04.207478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.276 [2024-07-16 00:06:04.207516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.276 [2024-07-16 00:06:04.207529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.276 [2024-07-16 00:06:04.207768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.276 [2024-07-16 00:06:04.208001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.276 [2024-07-16 00:06:04.208011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.276 [2024-07-16 00:06:04.208024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.276 [2024-07-16 00:06:04.211541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.276 [2024-07-16 00:06:04.220648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.276 [2024-07-16 00:06:04.221316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.276 [2024-07-16 00:06:04.221354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.276 [2024-07-16 00:06:04.221367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.276 [2024-07-16 00:06:04.221607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.276 [2024-07-16 00:06:04.221828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.276 [2024-07-16 00:06:04.221837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.276 [2024-07-16 00:06:04.221844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.276 [2024-07-16 00:06:04.225361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.276 [2024-07-16 00:06:04.234447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.276 [2024-07-16 00:06:04.235159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.276 [2024-07-16 00:06:04.235198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.277 [2024-07-16 00:06:04.235209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.277 [2024-07-16 00:06:04.235453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.277 [2024-07-16 00:06:04.235675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.277 [2024-07-16 00:06:04.235684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.277 [2024-07-16 00:06:04.235692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.277 [2024-07-16 00:06:04.239194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.277 [2024-07-16 00:06:04.248305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.277 [2024-07-16 00:06:04.248989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.277 [2024-07-16 00:06:04.249027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.277 [2024-07-16 00:06:04.249038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.277 [2024-07-16 00:06:04.249284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.277 [2024-07-16 00:06:04.249505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.277 [2024-07-16 00:06:04.249515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.277 [2024-07-16 00:06:04.249523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.277 [2024-07-16 00:06:04.253027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.277 [2024-07-16 00:06:04.262114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.277 [2024-07-16 00:06:04.262694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.277 [2024-07-16 00:06:04.262718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.277 [2024-07-16 00:06:04.262727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.277 [2024-07-16 00:06:04.262944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.277 [2024-07-16 00:06:04.263161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.277 [2024-07-16 00:06:04.263170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.277 [2024-07-16 00:06:04.263177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.277 [2024-07-16 00:06:04.266677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.277 [2024-07-16 00:06:04.275966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.277 [2024-07-16 00:06:04.276660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.277 [2024-07-16 00:06:04.276698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.277 [2024-07-16 00:06:04.276709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.277 [2024-07-16 00:06:04.276947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.277 [2024-07-16 00:06:04.277167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.277 [2024-07-16 00:06:04.277177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.277 [2024-07-16 00:06:04.277184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.277 [2024-07-16 00:06:04.280694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.277 [2024-07-16 00:06:04.289782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.277 [2024-07-16 00:06:04.290481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.277 [2024-07-16 00:06:04.290520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.277 [2024-07-16 00:06:04.290531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.277 [2024-07-16 00:06:04.290768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.277 [2024-07-16 00:06:04.290989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.277 [2024-07-16 00:06:04.290999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.277 [2024-07-16 00:06:04.291007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.277 [2024-07-16 00:06:04.294517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.277 [2024-07-16 00:06:04.303603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.277 [2024-07-16 00:06:04.304310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.277 [2024-07-16 00:06:04.304349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.277 [2024-07-16 00:06:04.304361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.277 [2024-07-16 00:06:04.304600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.277 [2024-07-16 00:06:04.304825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.277 [2024-07-16 00:06:04.304834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.277 [2024-07-16 00:06:04.304842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.277 [2024-07-16 00:06:04.308358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.277 [2024-07-16 00:06:04.317453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.277 [2024-07-16 00:06:04.318036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.277 [2024-07-16 00:06:04.318072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.277 [2024-07-16 00:06:04.318083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.277 [2024-07-16 00:06:04.318326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.277 [2024-07-16 00:06:04.318547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.277 [2024-07-16 00:06:04.318557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.277 [2024-07-16 00:06:04.318565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.277 [2024-07-16 00:06:04.322067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.277 [2024-07-16 00:06:04.331360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.277 [2024-07-16 00:06:04.331983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.277 [2024-07-16 00:06:04.332001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.277 [2024-07-16 00:06:04.332009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.277 [2024-07-16 00:06:04.332226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.277 [2024-07-16 00:06:04.332450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.277 [2024-07-16 00:06:04.332459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.277 [2024-07-16 00:06:04.332466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.277 [2024-07-16 00:06:04.335964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.277 [2024-07-16 00:06:04.345254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.277 [2024-07-16 00:06:04.345951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.277 [2024-07-16 00:06:04.345990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.277 [2024-07-16 00:06:04.346001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.277 [2024-07-16 00:06:04.346247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.277 [2024-07-16 00:06:04.346469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.277 [2024-07-16 00:06:04.346479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.277 [2024-07-16 00:06:04.346486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.277 [2024-07-16 00:06:04.349993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.277 [2024-07-16 00:06:04.359083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.278 [2024-07-16 00:06:04.359686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.278 [2024-07-16 00:06:04.359705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.278 [2024-07-16 00:06:04.359714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.278 [2024-07-16 00:06:04.359931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.278 [2024-07-16 00:06:04.360149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.278 [2024-07-16 00:06:04.360157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.278 [2024-07-16 00:06:04.360165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.278 [2024-07-16 00:06:04.363670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.278 [2024-07-16 00:06:04.372955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.278 [2024-07-16 00:06:04.373446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.278 [2024-07-16 00:06:04.373464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.278 [2024-07-16 00:06:04.373471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.278 [2024-07-16 00:06:04.373688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.278 [2024-07-16 00:06:04.373904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.278 [2024-07-16 00:06:04.373913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.278 [2024-07-16 00:06:04.373920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.278 [2024-07-16 00:06:04.377419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.278 [2024-07-16 00:06:04.386701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.278 [2024-07-16 00:06:04.387347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.278 [2024-07-16 00:06:04.387386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.278 [2024-07-16 00:06:04.387399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.278 [2024-07-16 00:06:04.387637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.278 [2024-07-16 00:06:04.387858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.278 [2024-07-16 00:06:04.387868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.278 [2024-07-16 00:06:04.387876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.278 [2024-07-16 00:06:04.391389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.278 [2024-07-16 00:06:04.400476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.278 [2024-07-16 00:06:04.401139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.278 [2024-07-16 00:06:04.401178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.278 [2024-07-16 00:06:04.401195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.278 [2024-07-16 00:06:04.401440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.278 [2024-07-16 00:06:04.401662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.278 [2024-07-16 00:06:04.401671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.278 [2024-07-16 00:06:04.401679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.278 [2024-07-16 00:06:04.405182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.278 [2024-07-16 00:06:04.414275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.278 [2024-07-16 00:06:04.414870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.278 [2024-07-16 00:06:04.414889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.278 [2024-07-16 00:06:04.414897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.278 [2024-07-16 00:06:04.415114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.278 [2024-07-16 00:06:04.415338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.278 [2024-07-16 00:06:04.415348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.278 [2024-07-16 00:06:04.415355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.278 [2024-07-16 00:06:04.418866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.278 [2024-07-16 00:06:04.428153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.278 [2024-07-16 00:06:04.428827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.278 [2024-07-16 00:06:04.428866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.278 [2024-07-16 00:06:04.428877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.278 [2024-07-16 00:06:04.429114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.278 [2024-07-16 00:06:04.429343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.278 [2024-07-16 00:06:04.429353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.278 [2024-07-16 00:06:04.429361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.278 [2024-07-16 00:06:04.432864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.278 [2024-07-16 00:06:04.441954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.278 [2024-07-16 00:06:04.442622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.278 [2024-07-16 00:06:04.442660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.278 [2024-07-16 00:06:04.442671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.278 [2024-07-16 00:06:04.442907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.278 [2024-07-16 00:06:04.443128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.278 [2024-07-16 00:06:04.443142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.278 [2024-07-16 00:06:04.443150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.278 [2024-07-16 00:06:04.446663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.278 [2024-07-16 00:06:04.455747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.278 [2024-07-16 00:06:04.456202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.278 [2024-07-16 00:06:04.456224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.278 [2024-07-16 00:06:04.456241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.278 [2024-07-16 00:06:04.456462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.278 [2024-07-16 00:06:04.456680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.278 [2024-07-16 00:06:04.456689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.278 [2024-07-16 00:06:04.456696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.278 [2024-07-16 00:06:04.460196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.540 [2024-07-16 00:06:04.469691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.540 [2024-07-16 00:06:04.470446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.540 [2024-07-16 00:06:04.470484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.540 [2024-07-16 00:06:04.470495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.540 [2024-07-16 00:06:04.470732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.540 [2024-07-16 00:06:04.470953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.540 [2024-07-16 00:06:04.470962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.540 [2024-07-16 00:06:04.470970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.540 [2024-07-16 00:06:04.474479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.540 [2024-07-16 00:06:04.483580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.540 [2024-07-16 00:06:04.484197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.540 [2024-07-16 00:06:04.484215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.540 [2024-07-16 00:06:04.484223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.540 [2024-07-16 00:06:04.484445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.540 [2024-07-16 00:06:04.484663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.541 [2024-07-16 00:06:04.484672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.541 [2024-07-16 00:06:04.484679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.541 [2024-07-16 00:06:04.488172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.541 [2024-07-16 00:06:04.497469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.541 [2024-07-16 00:06:04.498073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.541 [2024-07-16 00:06:04.498090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.541 [2024-07-16 00:06:04.498097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.541 [2024-07-16 00:06:04.498320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.541 [2024-07-16 00:06:04.498538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.541 [2024-07-16 00:06:04.498548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.541 [2024-07-16 00:06:04.498555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.541 [2024-07-16 00:06:04.502050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.541 [2024-07-16 00:06:04.511338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.541 [2024-07-16 00:06:04.512002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.541 [2024-07-16 00:06:04.512040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.541 [2024-07-16 00:06:04.512051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.541 [2024-07-16 00:06:04.512300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.541 [2024-07-16 00:06:04.512521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.541 [2024-07-16 00:06:04.512531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.541 [2024-07-16 00:06:04.512539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.541 [2024-07-16 00:06:04.516041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.541 [2024-07-16 00:06:04.525139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.541 [2024-07-16 00:06:04.525720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.541 [2024-07-16 00:06:04.525758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.541 [2024-07-16 00:06:04.525769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.541 [2024-07-16 00:06:04.526005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.541 [2024-07-16 00:06:04.526225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.541 [2024-07-16 00:06:04.526243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.541 [2024-07-16 00:06:04.526251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.541 [2024-07-16 00:06:04.529752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.541 [2024-07-16 00:06:04.539047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.541 [2024-07-16 00:06:04.539678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.541 [2024-07-16 00:06:04.539696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.541 [2024-07-16 00:06:04.539705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.541 [2024-07-16 00:06:04.539930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.541 [2024-07-16 00:06:04.540147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.541 [2024-07-16 00:06:04.540156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.541 [2024-07-16 00:06:04.540164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.541 [2024-07-16 00:06:04.543669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.541 [2024-07-16 00:06:04.552954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.541 [2024-07-16 00:06:04.553423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.541 [2024-07-16 00:06:04.553461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.541 [2024-07-16 00:06:04.553473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.541 [2024-07-16 00:06:04.553711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.541 [2024-07-16 00:06:04.553932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.541 [2024-07-16 00:06:04.553941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.541 [2024-07-16 00:06:04.553949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.541 [2024-07-16 00:06:04.557461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.541 [2024-07-16 00:06:04.566754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.541 [2024-07-16 00:06:04.567336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.541 [2024-07-16 00:06:04.567375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.541 [2024-07-16 00:06:04.567388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.541 [2024-07-16 00:06:04.567628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.541 [2024-07-16 00:06:04.567849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.541 [2024-07-16 00:06:04.567858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.541 [2024-07-16 00:06:04.567866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.541 [2024-07-16 00:06:04.571379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.541 [2024-07-16 00:06:04.580680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.541 [2024-07-16 00:06:04.581365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.541 [2024-07-16 00:06:04.581403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.541 [2024-07-16 00:06:04.581415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.541 [2024-07-16 00:06:04.581651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.541 [2024-07-16 00:06:04.581873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.541 [2024-07-16 00:06:04.581882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.541 [2024-07-16 00:06:04.581894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.541 [2024-07-16 00:06:04.585408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.541 [2024-07-16 00:06:04.594495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.541 [2024-07-16 00:06:04.595033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.541 [2024-07-16 00:06:04.595052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.541 [2024-07-16 00:06:04.595060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.541 [2024-07-16 00:06:04.595283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.541 [2024-07-16 00:06:04.595501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.541 [2024-07-16 00:06:04.595510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.541 [2024-07-16 00:06:04.595517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.541 [2024-07-16 00:06:04.599021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.542 [2024-07-16 00:06:04.608309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.542 [2024-07-16 00:06:04.608906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.542 [2024-07-16 00:06:04.608922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.542 [2024-07-16 00:06:04.608930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.542 [2024-07-16 00:06:04.609147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.542 [2024-07-16 00:06:04.609368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.542 [2024-07-16 00:06:04.609378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.542 [2024-07-16 00:06:04.609385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.542 [2024-07-16 00:06:04.612884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.542 [2024-07-16 00:06:04.622187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.542 [2024-07-16 00:06:04.622897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.542 [2024-07-16 00:06:04.622937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.542 [2024-07-16 00:06:04.622950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.542 [2024-07-16 00:06:04.623188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.542 [2024-07-16 00:06:04.623418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.542 [2024-07-16 00:06:04.623429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.542 [2024-07-16 00:06:04.623436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.542 [2024-07-16 00:06:04.626940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.542 [2024-07-16 00:06:04.636029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.542 [2024-07-16 00:06:04.636713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.542 [2024-07-16 00:06:04.636751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.542 [2024-07-16 00:06:04.636762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.542 [2024-07-16 00:06:04.636999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.542 [2024-07-16 00:06:04.637219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.542 [2024-07-16 00:06:04.637228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.542 [2024-07-16 00:06:04.637244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.542 [2024-07-16 00:06:04.640748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.542 [2024-07-16 00:06:04.649836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.542 [2024-07-16 00:06:04.650541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.542 [2024-07-16 00:06:04.650580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.542 [2024-07-16 00:06:04.650591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.542 [2024-07-16 00:06:04.650827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.542 [2024-07-16 00:06:04.651048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.542 [2024-07-16 00:06:04.651058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.542 [2024-07-16 00:06:04.651066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.542 [2024-07-16 00:06:04.654574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.542 [2024-07-16 00:06:04.663661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.542 [2024-07-16 00:06:04.664280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.542 [2024-07-16 00:06:04.664300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.542 [2024-07-16 00:06:04.664309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.542 [2024-07-16 00:06:04.664527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.542 [2024-07-16 00:06:04.664743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.542 [2024-07-16 00:06:04.664753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.542 [2024-07-16 00:06:04.664760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.542 [2024-07-16 00:06:04.668263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.542 [2024-07-16 00:06:04.677554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.542 [2024-07-16 00:06:04.678241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.542 [2024-07-16 00:06:04.678279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.542 [2024-07-16 00:06:04.678292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.542 [2024-07-16 00:06:04.678532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.542 [2024-07-16 00:06:04.678757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.542 [2024-07-16 00:06:04.678766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.542 [2024-07-16 00:06:04.678774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.542 [2024-07-16 00:06:04.682280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.542 [2024-07-16 00:06:04.691363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.542 [2024-07-16 00:06:04.691973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.542 [2024-07-16 00:06:04.691992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.542 [2024-07-16 00:06:04.692000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.542 [2024-07-16 00:06:04.692216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.542 [2024-07-16 00:06:04.692440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.542 [2024-07-16 00:06:04.692450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.542 [2024-07-16 00:06:04.692457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.542 [2024-07-16 00:06:04.695954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.542 [2024-07-16 00:06:04.705246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.542 [2024-07-16 00:06:04.705786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.542 [2024-07-16 00:06:04.705824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.542 [2024-07-16 00:06:04.705835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.542 [2024-07-16 00:06:04.706072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.542 [2024-07-16 00:06:04.706301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.542 [2024-07-16 00:06:04.706312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.542 [2024-07-16 00:06:04.706320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.542 [2024-07-16 00:06:04.709824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.542 [2024-07-16 00:06:04.719128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.542 [2024-07-16 00:06:04.719824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.542 [2024-07-16 00:06:04.719863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.542 [2024-07-16 00:06:04.719874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.542 [2024-07-16 00:06:04.720111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.542 [2024-07-16 00:06:04.720345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.543 [2024-07-16 00:06:04.720355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.543 [2024-07-16 00:06:04.720363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.543 [2024-07-16 00:06:04.723872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.806 [2024-07-16 00:06:04.732960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.806 [2024-07-16 00:06:04.733647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.806 [2024-07-16 00:06:04.733685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.806 [2024-07-16 00:06:04.733696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.806 [2024-07-16 00:06:04.733933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.806 [2024-07-16 00:06:04.734153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.806 [2024-07-16 00:06:04.734163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.806 [2024-07-16 00:06:04.734170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.806 [2024-07-16 00:06:04.737682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.806 [2024-07-16 00:06:04.746768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.806 [2024-07-16 00:06:04.747385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.806 [2024-07-16 00:06:04.747423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.806 [2024-07-16 00:06:04.747434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.806 [2024-07-16 00:06:04.747671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.806 [2024-07-16 00:06:04.747891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.806 [2024-07-16 00:06:04.747901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.806 [2024-07-16 00:06:04.747909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.806 [2024-07-16 00:06:04.751420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.806 [2024-07-16 00:06:04.760549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.806 [2024-07-16 00:06:04.761062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.806 [2024-07-16 00:06:04.761100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.806 [2024-07-16 00:06:04.761111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.806 [2024-07-16 00:06:04.761357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.806 [2024-07-16 00:06:04.761578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.806 [2024-07-16 00:06:04.761588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.806 [2024-07-16 00:06:04.761595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.806 [2024-07-16 00:06:04.765099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.806 [2024-07-16 00:06:04.774396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.806 [2024-07-16 00:06:04.775108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.806 [2024-07-16 00:06:04.775146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.806 [2024-07-16 00:06:04.775162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.806 [2024-07-16 00:06:04.775407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.806 [2024-07-16 00:06:04.775629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.806 [2024-07-16 00:06:04.775638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.806 [2024-07-16 00:06:04.775646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.806 [2024-07-16 00:06:04.779147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.806 [2024-07-16 00:06:04.788234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.806 [2024-07-16 00:06:04.788802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.806 [2024-07-16 00:06:04.788821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.806 [2024-07-16 00:06:04.788829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.806 [2024-07-16 00:06:04.789046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.806 [2024-07-16 00:06:04.789269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.806 [2024-07-16 00:06:04.789278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.806 [2024-07-16 00:06:04.789284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.807 [2024-07-16 00:06:04.792781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.807 [2024-07-16 00:06:04.802071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.807 [2024-07-16 00:06:04.802689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.807 [2024-07-16 00:06:04.802707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.807 [2024-07-16 00:06:04.802714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.807 [2024-07-16 00:06:04.802930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.807 [2024-07-16 00:06:04.803148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.807 [2024-07-16 00:06:04.803156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.807 [2024-07-16 00:06:04.803163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.807 [2024-07-16 00:06:04.806666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.807 [2024-07-16 00:06:04.815952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.807 [2024-07-16 00:06:04.816616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.807 [2024-07-16 00:06:04.816654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.807 [2024-07-16 00:06:04.816665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.807 [2024-07-16 00:06:04.816902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.807 [2024-07-16 00:06:04.817124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.807 [2024-07-16 00:06:04.817139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.807 [2024-07-16 00:06:04.817147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.807 [2024-07-16 00:06:04.820668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.807 [2024-07-16 00:06:04.829758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.807 [2024-07-16 00:06:04.830376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.807 [2024-07-16 00:06:04.830396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.807 [2024-07-16 00:06:04.830404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.807 [2024-07-16 00:06:04.830622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.807 [2024-07-16 00:06:04.830839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.807 [2024-07-16 00:06:04.830848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.807 [2024-07-16 00:06:04.830856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.807 [2024-07-16 00:06:04.834364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:49.807 [2024-07-16 00:06:04.843652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.807 [2024-07-16 00:06:04.844236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.807 [2024-07-16 00:06:04.844253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.807 [2024-07-16 00:06:04.844261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.807 [2024-07-16 00:06:04.844478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.807 [2024-07-16 00:06:04.844695] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.807 [2024-07-16 00:06:04.844704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.807 [2024-07-16 00:06:04.844711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.807 [2024-07-16 00:06:04.848204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.807 [2024-07-16 00:06:04.857489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.807 [2024-07-16 00:06:04.858090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.807 [2024-07-16 00:06:04.858106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.807 [2024-07-16 00:06:04.858114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.807 [2024-07-16 00:06:04.858335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.807 [2024-07-16 00:06:04.858553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.807 [2024-07-16 00:06:04.858561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.807 [2024-07-16 00:06:04.858569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.807 [2024-07-16 00:06:04.862062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
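How long this retry loop runs is a property of how the controller was attached on the host side, not of the target. As a hedged sketch only (the exact attach command used earlier in this run is not shown here, and the socket path and bdev name below are illustrative), SPDK's rpc.py can bound the behaviour with reconnect options at attach time:
  # sketch: attach with bounded retries instead of retrying indefinitely
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 30 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 5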
00:29:49.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 655560 Killed "${NVMF_APP[@]}" "$@"
00:29:49.807 [2024-07-16 00:06:04.871352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.807 [2024-07-16 00:06:04.871990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.807 [2024-07-16 00:06:04.872028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420
00:29:49.807 [2024-07-16 00:06:04.872039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set
00:29:49.807 00:06:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:49.807 [2024-07-16 00:06:04.872286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor
00:29:49.807 00:06:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:49.807 [2024-07-16 00:06:04.872507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.807 [2024-07-16 00:06:04.872517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.807 [2024-07-16 00:06:04.872525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.807 00:06:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:49.807 00:06:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@716 -- # xtrace_disable
00:29:49.807 00:06:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:49.807 [2024-07-16 00:06:04.876029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.807 00:06:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=657370
00:29:49.807 00:06:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 657370
00:29:49.807 00:06:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:49.807 00:06:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@823 -- # '[' -z 657370 ']'
00:29:49.807 00:06:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:49.807 00:06:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@828 -- # local max_retries=100
00:29:49.807 00:06:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:49.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:49.807 00:06:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # xtrace_disable 00:29:49.807 00:06:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:49.807 [2024-07-16 00:06:04.885118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.807 [2024-07-16 00:06:04.885837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.807 [2024-07-16 00:06:04.885877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.807 [2024-07-16 00:06:04.885888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.807 [2024-07-16 00:06:04.886126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.807 [2024-07-16 00:06:04.886356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.807 [2024-07-16 00:06:04.886366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.807 [2024-07-16 00:06:04.886374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.807 [2024-07-16 00:06:04.889880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.807 [2024-07-16 00:06:04.898986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.807 [2024-07-16 00:06:04.899610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.808 [2024-07-16 00:06:04.899649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.808 [2024-07-16 00:06:04.899660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.808 [2024-07-16 00:06:04.899896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.808 [2024-07-16 00:06:04.900117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.808 [2024-07-16 00:06:04.900128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.808 [2024-07-16 00:06:04.900136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.808 [2024-07-16 00:06:04.903651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
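The interleaved shell trace above shows bdevperf.sh reacting to the killed target: tgt_init calls nvmfappstart -m 0xE, which launches a fresh nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then waits (waitforlisten) for its RPC socket to answer. A rough equivalent of that start-and-wait pattern, using the same workspace paths and the generic spdk_get_version RPC as the readiness probe (the real helper polls somewhat differently), would be:
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  pid=$!
  # poll the RPC socket until the target answers, like waitforlisten does
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
      sleep 0.5
  done
  echo "nvmf_tgt ($pid) is listening on /var/tmp/spdk.sock"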
00:29:49.808 [2024-07-16 00:06:04.912745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.808 [2024-07-16 00:06:04.913441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.808 [2024-07-16 00:06:04.913479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.808 [2024-07-16 00:06:04.913491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.808 [2024-07-16 00:06:04.913727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.808 [2024-07-16 00:06:04.913948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.808 [2024-07-16 00:06:04.913958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.808 [2024-07-16 00:06:04.913965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.808 [2024-07-16 00:06:04.917489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.808 [2024-07-16 00:06:04.926584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.808 [2024-07-16 00:06:04.927300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.808 [2024-07-16 00:06:04.927338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.808 [2024-07-16 00:06:04.927351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.808 [2024-07-16 00:06:04.927589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.808 [2024-07-16 00:06:04.927810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.808 [2024-07-16 00:06:04.927820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.808 [2024-07-16 00:06:04.927827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.808 [2024-07-16 00:06:04.931349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.808 [2024-07-16 00:06:04.932631] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:29:49.808 [2024-07-16 00:06:04.932679] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:49.808 [2024-07-16 00:06:04.940441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.808 [2024-07-16 00:06:04.941149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.808 [2024-07-16 00:06:04.941193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.808 [2024-07-16 00:06:04.941206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.808 [2024-07-16 00:06:04.941454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.808 [2024-07-16 00:06:04.941676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.808 [2024-07-16 00:06:04.941686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.808 [2024-07-16 00:06:04.941694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.808 [2024-07-16 00:06:04.945199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.808 [2024-07-16 00:06:04.954297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.808 [2024-07-16 00:06:04.954899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.808 [2024-07-16 00:06:04.954937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.808 [2024-07-16 00:06:04.954949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.808 [2024-07-16 00:06:04.955186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.808 [2024-07-16 00:06:04.955415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.808 [2024-07-16 00:06:04.955426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.808 [2024-07-16 00:06:04.955435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.808 [2024-07-16 00:06:04.958939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
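The DPDK EAL parameters above echo the -m 0xE core mask passed to nvmf_tgt: 0xE is binary 1110, so the app is pinned to cores 1, 2 and 3, which matches the "Total cores available: 3" notice and the three reactors started further down. A small snippet to decode such a mask (illustrative only, not part of the test scripts):
  mask=0xE
  for core in $(seq 0 31); do (( (mask >> core) & 1 )) && printf '%s ' "$core"; done; echo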
00:29:49.808 [2024-07-16 00:06:04.968239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.808 [2024-07-16 00:06:04.968911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.808 [2024-07-16 00:06:04.968949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.808 [2024-07-16 00:06:04.968961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.808 [2024-07-16 00:06:04.969199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.808 [2024-07-16 00:06:04.969429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.808 [2024-07-16 00:06:04.969440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.808 [2024-07-16 00:06:04.969448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.808 [2024-07-16 00:06:04.972957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.808 [2024-07-16 00:06:04.982140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:49.808 [2024-07-16 00:06:04.982822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.808 [2024-07-16 00:06:04.982861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:49.808 [2024-07-16 00:06:04.982873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:49.808 [2024-07-16 00:06:04.983110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:49.808 [2024-07-16 00:06:04.983343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.808 [2024-07-16 00:06:04.983355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.808 [2024-07-16 00:06:04.983362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.808 [2024-07-16 00:06:04.986867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.071 [2024-07-16 00:06:04.995959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.071 [2024-07-16 00:06:04.996518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.071 [2024-07-16 00:06:04.996557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.071 [2024-07-16 00:06:04.996570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.071 [2024-07-16 00:06:04.996810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.071 [2024-07-16 00:06:04.997030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.071 [2024-07-16 00:06:04.997040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.071 [2024-07-16 00:06:04.997049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.071 [2024-07-16 00:06:05.000564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.071 [2024-07-16 00:06:05.009902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.071 [2024-07-16 00:06:05.010584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.071 [2024-07-16 00:06:05.010623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.071 [2024-07-16 00:06:05.010635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.071 [2024-07-16 00:06:05.010872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.071 [2024-07-16 00:06:05.011093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.071 [2024-07-16 00:06:05.011103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.071 [2024-07-16 00:06:05.011111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.071 [2024-07-16 00:06:05.014627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.071 [2024-07-16 00:06:05.019964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:50.071 [2024-07-16 00:06:05.023730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.071 [2024-07-16 00:06:05.024274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.071 [2024-07-16 00:06:05.024313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.071 [2024-07-16 00:06:05.024324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.071 [2024-07-16 00:06:05.024562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.071 [2024-07-16 00:06:05.024783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.071 [2024-07-16 00:06:05.024793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.071 [2024-07-16 00:06:05.024805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.071 [2024-07-16 00:06:05.028323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.071 [2024-07-16 00:06:05.037623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.071 [2024-07-16 00:06:05.038306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.071 [2024-07-16 00:06:05.038344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.071 [2024-07-16 00:06:05.038357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.071 [2024-07-16 00:06:05.038595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.071 [2024-07-16 00:06:05.038817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.071 [2024-07-16 00:06:05.038826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.071 [2024-07-16 00:06:05.038833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.071 [2024-07-16 00:06:05.042359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.071 [2024-07-16 00:06:05.051456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.071 [2024-07-16 00:06:05.052055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.071 [2024-07-16 00:06:05.052075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.071 [2024-07-16 00:06:05.052084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.071 [2024-07-16 00:06:05.052308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.071 [2024-07-16 00:06:05.052527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.071 [2024-07-16 00:06:05.052535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.071 [2024-07-16 00:06:05.052543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.072 [2024-07-16 00:06:05.056042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.072 [2024-07-16 00:06:05.065340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.072 [2024-07-16 00:06:05.066061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.072 [2024-07-16 00:06:05.066101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.072 [2024-07-16 00:06:05.066112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.072 [2024-07-16 00:06:05.066360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.072 [2024-07-16 00:06:05.066582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.072 [2024-07-16 00:06:05.066591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.072 [2024-07-16 00:06:05.066600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.072 [2024-07-16 00:06:05.070107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.072 [2024-07-16 00:06:05.073626] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:50.072 [2024-07-16 00:06:05.073653] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:50.072 [2024-07-16 00:06:05.073663] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:50.072 [2024-07-16 00:06:05.073667] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:50.072 [2024-07-16 00:06:05.073672] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
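The app_setup_trace notices above describe two ways to inspect the tracepoints enabled by -e 0xFFFF: attach spdk_trace to the running target, or keep the shared-memory file for offline decoding. A sketch of both, assuming the workspace layout seen in this log (the -f option for reading a saved trace file is my assumption; check spdk_trace -h on the build in use):
  # live snapshot while nvmf_tgt is still running
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
  # or preserve the trace for offline analysis after the run
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -f /tmp/nvmf_trace.0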
00:29:50.072 [2024-07-16 00:06:05.073772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:50.072 [2024-07-16 00:06:05.073932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.072 [2024-07-16 00:06:05.073934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:50.072 [2024-07-16 00:06:05.079157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.072 [2024-07-16 00:06:05.079800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.072 [2024-07-16 00:06:05.079840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.072 [2024-07-16 00:06:05.079851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.072 [2024-07-16 00:06:05.080091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.072 [2024-07-16 00:06:05.080320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.072 [2024-07-16 00:06:05.080330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.072 [2024-07-16 00:06:05.080338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.072 [2024-07-16 00:06:05.083843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.072 [2024-07-16 00:06:05.092934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.072 [2024-07-16 00:06:05.093564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.072 [2024-07-16 00:06:05.093605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.072 [2024-07-16 00:06:05.093616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.072 [2024-07-16 00:06:05.093855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.072 [2024-07-16 00:06:05.094076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.072 [2024-07-16 00:06:05.094085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.072 [2024-07-16 00:06:05.094094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.072 [2024-07-16 00:06:05.097606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.072 [2024-07-16 00:06:05.106691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.072 [2024-07-16 00:06:05.107455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.072 [2024-07-16 00:06:05.107494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.072 [2024-07-16 00:06:05.107506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.072 [2024-07-16 00:06:05.107744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.072 [2024-07-16 00:06:05.107965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.072 [2024-07-16 00:06:05.107974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.072 [2024-07-16 00:06:05.107987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.072 [2024-07-16 00:06:05.111500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.072 [2024-07-16 00:06:05.120602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.072 [2024-07-16 00:06:05.121331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.072 [2024-07-16 00:06:05.121370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.072 [2024-07-16 00:06:05.121382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.072 [2024-07-16 00:06:05.121622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.072 [2024-07-16 00:06:05.121843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.072 [2024-07-16 00:06:05.121852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.072 [2024-07-16 00:06:05.121860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.072 [2024-07-16 00:06:05.125372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.072 [2024-07-16 00:06:05.134455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.072 [2024-07-16 00:06:05.135010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.072 [2024-07-16 00:06:05.135048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.072 [2024-07-16 00:06:05.135059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.072 [2024-07-16 00:06:05.135304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.072 [2024-07-16 00:06:05.135526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.072 [2024-07-16 00:06:05.135535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.072 [2024-07-16 00:06:05.135543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.072 [2024-07-16 00:06:05.139274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.072 [2024-07-16 00:06:05.148371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.072 [2024-07-16 00:06:05.149072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.072 [2024-07-16 00:06:05.149110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.072 [2024-07-16 00:06:05.149122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.072 [2024-07-16 00:06:05.149366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.072 [2024-07-16 00:06:05.149588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.072 [2024-07-16 00:06:05.149597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.072 [2024-07-16 00:06:05.149605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.072 [2024-07-16 00:06:05.153105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.072 [2024-07-16 00:06:05.162190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.072 [2024-07-16 00:06:05.162922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.072 [2024-07-16 00:06:05.162965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.072 [2024-07-16 00:06:05.162976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.072 [2024-07-16 00:06:05.163214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.072 [2024-07-16 00:06:05.163443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.072 [2024-07-16 00:06:05.163453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.072 [2024-07-16 00:06:05.163461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.073 [2024-07-16 00:06:05.166961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.073 [2024-07-16 00:06:05.176044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.073 [2024-07-16 00:06:05.176732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.073 [2024-07-16 00:06:05.176770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.073 [2024-07-16 00:06:05.176782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.073 [2024-07-16 00:06:05.177018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.073 [2024-07-16 00:06:05.177247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.073 [2024-07-16 00:06:05.177257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.073 [2024-07-16 00:06:05.177265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.073 [2024-07-16 00:06:05.180770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.073 [2024-07-16 00:06:05.189856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.073 [2024-07-16 00:06:05.190362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.073 [2024-07-16 00:06:05.190401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.073 [2024-07-16 00:06:05.190414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.073 [2024-07-16 00:06:05.190654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.073 [2024-07-16 00:06:05.190876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.073 [2024-07-16 00:06:05.190885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.073 [2024-07-16 00:06:05.190892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.073 [2024-07-16 00:06:05.194399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.073 [2024-07-16 00:06:05.203686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.073 [2024-07-16 00:06:05.204468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.073 [2024-07-16 00:06:05.204506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.073 [2024-07-16 00:06:05.204517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.073 [2024-07-16 00:06:05.204754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.073 [2024-07-16 00:06:05.204980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.073 [2024-07-16 00:06:05.204990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.073 [2024-07-16 00:06:05.204998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.073 [2024-07-16 00:06:05.208511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.073 [2024-07-16 00:06:05.217591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.073 [2024-07-16 00:06:05.218318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.073 [2024-07-16 00:06:05.218357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.073 [2024-07-16 00:06:05.218369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.073 [2024-07-16 00:06:05.218608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.073 [2024-07-16 00:06:05.218829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.073 [2024-07-16 00:06:05.218838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.073 [2024-07-16 00:06:05.218846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.073 [2024-07-16 00:06:05.222366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.073 [2024-07-16 00:06:05.231447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.073 [2024-07-16 00:06:05.231922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.073 [2024-07-16 00:06:05.231941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.073 [2024-07-16 00:06:05.231949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.073 [2024-07-16 00:06:05.232166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.073 [2024-07-16 00:06:05.232390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.073 [2024-07-16 00:06:05.232399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.073 [2024-07-16 00:06:05.232406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.073 [2024-07-16 00:06:05.235903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.073 [2024-07-16 00:06:05.245190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.073 [2024-07-16 00:06:05.245750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.073 [2024-07-16 00:06:05.245788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.073 [2024-07-16 00:06:05.245799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.073 [2024-07-16 00:06:05.246036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.073 [2024-07-16 00:06:05.246264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.073 [2024-07-16 00:06:05.246274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.073 [2024-07-16 00:06:05.246282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.073 [2024-07-16 00:06:05.249785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.073 [2024-07-16 00:06:05.259081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.073 [2024-07-16 00:06:05.259694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.073 [2024-07-16 00:06:05.259714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.073 [2024-07-16 00:06:05.259722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.336 [2024-07-16 00:06:05.259939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.336 [2024-07-16 00:06:05.260157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.336 [2024-07-16 00:06:05.260168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.336 [2024-07-16 00:06:05.260175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.336 [2024-07-16 00:06:05.263678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.336 [2024-07-16 00:06:05.273019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.336 [2024-07-16 00:06:05.273743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.336 [2024-07-16 00:06:05.273781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.336 [2024-07-16 00:06:05.273793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.336 [2024-07-16 00:06:05.274030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.336 [2024-07-16 00:06:05.274258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.336 [2024-07-16 00:06:05.274268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.336 [2024-07-16 00:06:05.274276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.336 [2024-07-16 00:06:05.277777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.336 [2024-07-16 00:06:05.286858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.336 [2024-07-16 00:06:05.287618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.336 [2024-07-16 00:06:05.287657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.336 [2024-07-16 00:06:05.287668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.336 [2024-07-16 00:06:05.287905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.336 [2024-07-16 00:06:05.288126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.336 [2024-07-16 00:06:05.288135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.336 [2024-07-16 00:06:05.288142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.336 [2024-07-16 00:06:05.291652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.336 [2024-07-16 00:06:05.300735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.336 [2024-07-16 00:06:05.301365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.336 [2024-07-16 00:06:05.301404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.336 [2024-07-16 00:06:05.301420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.336 [2024-07-16 00:06:05.301659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.336 [2024-07-16 00:06:05.301880] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.336 [2024-07-16 00:06:05.301889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.336 [2024-07-16 00:06:05.301897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.336 [2024-07-16 00:06:05.305409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.336 [2024-07-16 00:06:05.314491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.336 [2024-07-16 00:06:05.315226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.336 [2024-07-16 00:06:05.315270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.336 [2024-07-16 00:06:05.315283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.336 [2024-07-16 00:06:05.315523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.336 [2024-07-16 00:06:05.315744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.336 [2024-07-16 00:06:05.315753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.336 [2024-07-16 00:06:05.315761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.336 [2024-07-16 00:06:05.319267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.337 [2024-07-16 00:06:05.328363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.337 [2024-07-16 00:06:05.329086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.337 [2024-07-16 00:06:05.329125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.337 [2024-07-16 00:06:05.329136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.337 [2024-07-16 00:06:05.329380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.337 [2024-07-16 00:06:05.329602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.337 [2024-07-16 00:06:05.329611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.337 [2024-07-16 00:06:05.329619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.337 [2024-07-16 00:06:05.333120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.337 [2024-07-16 00:06:05.342204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.337 [2024-07-16 00:06:05.342915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.337 [2024-07-16 00:06:05.342953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.337 [2024-07-16 00:06:05.342964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.337 [2024-07-16 00:06:05.343201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.337 [2024-07-16 00:06:05.343430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.337 [2024-07-16 00:06:05.343444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.337 [2024-07-16 00:06:05.343452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.337 [2024-07-16 00:06:05.346962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.337 [2024-07-16 00:06:05.356048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.337 [2024-07-16 00:06:05.356742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.337 [2024-07-16 00:06:05.356780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.337 [2024-07-16 00:06:05.356792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.337 [2024-07-16 00:06:05.357028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.337 [2024-07-16 00:06:05.357257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.337 [2024-07-16 00:06:05.357267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.337 [2024-07-16 00:06:05.357275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.337 [2024-07-16 00:06:05.360779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.337 [2024-07-16 00:06:05.369861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.337 [2024-07-16 00:06:05.370487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.337 [2024-07-16 00:06:05.370525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.337 [2024-07-16 00:06:05.370536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.337 [2024-07-16 00:06:05.370773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.337 [2024-07-16 00:06:05.370994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.337 [2024-07-16 00:06:05.371003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.337 [2024-07-16 00:06:05.371011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.337 [2024-07-16 00:06:05.374521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.337 [2024-07-16 00:06:05.383604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.337 [2024-07-16 00:06:05.384237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.337 [2024-07-16 00:06:05.384257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.337 [2024-07-16 00:06:05.384265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.337 [2024-07-16 00:06:05.384482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.337 [2024-07-16 00:06:05.384700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.337 [2024-07-16 00:06:05.384708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.337 [2024-07-16 00:06:05.384715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.337 [2024-07-16 00:06:05.388210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.337 [2024-07-16 00:06:05.397497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.337 [2024-07-16 00:06:05.398075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.337 [2024-07-16 00:06:05.398114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.337 [2024-07-16 00:06:05.398125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.337 [2024-07-16 00:06:05.398369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.337 [2024-07-16 00:06:05.398591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.337 [2024-07-16 00:06:05.398600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.337 [2024-07-16 00:06:05.398607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.337 [2024-07-16 00:06:05.402109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.337 [2024-07-16 00:06:05.411402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.337 [2024-07-16 00:06:05.412081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.337 [2024-07-16 00:06:05.412119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.337 [2024-07-16 00:06:05.412130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.337 [2024-07-16 00:06:05.412375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.337 [2024-07-16 00:06:05.412597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.337 [2024-07-16 00:06:05.412606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.337 [2024-07-16 00:06:05.412614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.337 [2024-07-16 00:06:05.416113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.337 [2024-07-16 00:06:05.425204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.337 [2024-07-16 00:06:05.425943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.337 [2024-07-16 00:06:05.425982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.337 [2024-07-16 00:06:05.425992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.337 [2024-07-16 00:06:05.426236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.337 [2024-07-16 00:06:05.426457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.337 [2024-07-16 00:06:05.426468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.337 [2024-07-16 00:06:05.426476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.337 [2024-07-16 00:06:05.429978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.337 [2024-07-16 00:06:05.439062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.337 [2024-07-16 00:06:05.439796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.338 [2024-07-16 00:06:05.439835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.338 [2024-07-16 00:06:05.439845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.338 [2024-07-16 00:06:05.440086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.338 [2024-07-16 00:06:05.440314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.338 [2024-07-16 00:06:05.440324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.338 [2024-07-16 00:06:05.440332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.338 [2024-07-16 00:06:05.443838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.338 [2024-07-16 00:06:05.452924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.338 [2024-07-16 00:06:05.453644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.338 [2024-07-16 00:06:05.453682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.338 [2024-07-16 00:06:05.453693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.338 [2024-07-16 00:06:05.453930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.338 [2024-07-16 00:06:05.454151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.338 [2024-07-16 00:06:05.454160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.338 [2024-07-16 00:06:05.454168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.338 [2024-07-16 00:06:05.457680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.338 [2024-07-16 00:06:05.466761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.338 [2024-07-16 00:06:05.467524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.338 [2024-07-16 00:06:05.467563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.338 [2024-07-16 00:06:05.467574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.338 [2024-07-16 00:06:05.467811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.338 [2024-07-16 00:06:05.468032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.338 [2024-07-16 00:06:05.468041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.338 [2024-07-16 00:06:05.468049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.338 [2024-07-16 00:06:05.471556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.338 [2024-07-16 00:06:05.480634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.338 [2024-07-16 00:06:05.481097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.338 [2024-07-16 00:06:05.481117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.338 [2024-07-16 00:06:05.481125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.338 [2024-07-16 00:06:05.481349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.338 [2024-07-16 00:06:05.481567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.338 [2024-07-16 00:06:05.481576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.338 [2024-07-16 00:06:05.481587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.338 [2024-07-16 00:06:05.485083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.338 [2024-07-16 00:06:05.494574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.338 [2024-07-16 00:06:05.495280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.338 [2024-07-16 00:06:05.495319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.338 [2024-07-16 00:06:05.495332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.338 [2024-07-16 00:06:05.495570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.338 [2024-07-16 00:06:05.495791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.338 [2024-07-16 00:06:05.495800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.338 [2024-07-16 00:06:05.495808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.338 [2024-07-16 00:06:05.499320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.338 [2024-07-16 00:06:05.508406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.338 [2024-07-16 00:06:05.509126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.338 [2024-07-16 00:06:05.509165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.338 [2024-07-16 00:06:05.509176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.338 [2024-07-16 00:06:05.509421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.338 [2024-07-16 00:06:05.509642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.338 [2024-07-16 00:06:05.509651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.338 [2024-07-16 00:06:05.509659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.338 [2024-07-16 00:06:05.513158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.338 [2024-07-16 00:06:05.522253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.338 [2024-07-16 00:06:05.522946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.338 [2024-07-16 00:06:05.522984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.338 [2024-07-16 00:06:05.522995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.338 [2024-07-16 00:06:05.523240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.338 [2024-07-16 00:06:05.523461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.338 [2024-07-16 00:06:05.523471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.338 [2024-07-16 00:06:05.523478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.601 [2024-07-16 00:06:05.526980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.601 [2024-07-16 00:06:05.536066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.601 [2024-07-16 00:06:05.536755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.601 [2024-07-16 00:06:05.536797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.601 [2024-07-16 00:06:05.536808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.601 [2024-07-16 00:06:05.537046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.601 [2024-07-16 00:06:05.537273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.601 [2024-07-16 00:06:05.537283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.601 [2024-07-16 00:06:05.537291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.601 [2024-07-16 00:06:05.540796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
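Each failed reset in this block follows the same nine-record pattern (disconnect NOTICE, connect() refused, connect_sock error, recv-state error, flush EBADF, process_init error, reconnect_poll_async error, ctrlr_fail, "Resetting controller failed"), and the attempts are spaced roughly 13-14 ms apart (for example 00:06:05.522253 to 00:06:05.536066 just above). A throwaway way to confirm that cadence from a saved copy of this console output; the filename bdevperf.log and the one-log-record-per-line assumption are mine, not the test's:

    # Hedged sketch: print the gap, in ms, between successive reconnect attempts.
    grep 'nvme_ctrlr_disconnect' bdevperf.log \
      | sed 's/.*\[2024-07-16 \([0-9:.]*\)\].*/\1/' \
      | awk -F: '{ t = $1*3600 + $2*60 + $3; if (NR > 1) printf "%.1f ms\n", (t - p) * 1000; p = t }'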
00:29:50.601 [2024-07-16 00:06:05.549888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.601 [2024-07-16 00:06:05.550572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.601 [2024-07-16 00:06:05.550611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.601 [2024-07-16 00:06:05.550622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.601 [2024-07-16 00:06:05.550859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.601 [2024-07-16 00:06:05.551080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.601 [2024-07-16 00:06:05.551090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.601 [2024-07-16 00:06:05.551098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.601 [2024-07-16 00:06:05.554616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.601 [2024-07-16 00:06:05.563709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.601 [2024-07-16 00:06:05.564524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.601 [2024-07-16 00:06:05.564563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.601 [2024-07-16 00:06:05.564574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.601 [2024-07-16 00:06:05.564811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.601 [2024-07-16 00:06:05.565032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.601 [2024-07-16 00:06:05.565041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.601 [2024-07-16 00:06:05.565049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.601 [2024-07-16 00:06:05.568561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.601 [2024-07-16 00:06:05.577646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.601 [2024-07-16 00:06:05.578217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.601 [2024-07-16 00:06:05.578262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.601 [2024-07-16 00:06:05.578275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.601 [2024-07-16 00:06:05.578514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.602 [2024-07-16 00:06:05.578739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.602 [2024-07-16 00:06:05.578749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.602 [2024-07-16 00:06:05.578757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.602 [2024-07-16 00:06:05.582263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.602 [2024-07-16 00:06:05.591555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.602 [2024-07-16 00:06:05.592196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.602 [2024-07-16 00:06:05.592243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.602 [2024-07-16 00:06:05.592255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.602 [2024-07-16 00:06:05.592492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.602 [2024-07-16 00:06:05.592713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.602 [2024-07-16 00:06:05.592723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.602 [2024-07-16 00:06:05.592730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.602 [2024-07-16 00:06:05.596239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.602 [2024-07-16 00:06:05.605325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.602 [2024-07-16 00:06:05.606059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.602 [2024-07-16 00:06:05.606098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.602 [2024-07-16 00:06:05.606109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.602 [2024-07-16 00:06:05.606354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.602 [2024-07-16 00:06:05.606576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.602 [2024-07-16 00:06:05.606585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.602 [2024-07-16 00:06:05.606593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.602 [2024-07-16 00:06:05.610096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.602 [2024-07-16 00:06:05.619181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.602 [2024-07-16 00:06:05.619754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.602 [2024-07-16 00:06:05.619793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.602 [2024-07-16 00:06:05.619804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.602 [2024-07-16 00:06:05.620041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.602 [2024-07-16 00:06:05.620277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.602 [2024-07-16 00:06:05.620287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.602 [2024-07-16 00:06:05.620295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.602 [2024-07-16 00:06:05.623800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.602 [2024-07-16 00:06:05.633100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.602 [2024-07-16 00:06:05.633679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.602 [2024-07-16 00:06:05.633716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.602 [2024-07-16 00:06:05.633727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.602 [2024-07-16 00:06:05.633965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.602 [2024-07-16 00:06:05.634186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.602 [2024-07-16 00:06:05.634196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.602 [2024-07-16 00:06:05.634203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.602 [2024-07-16 00:06:05.637716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.602 [2024-07-16 00:06:05.647007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.602 [2024-07-16 00:06:05.647735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.602 [2024-07-16 00:06:05.647774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.602 [2024-07-16 00:06:05.647786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.602 [2024-07-16 00:06:05.648023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.602 [2024-07-16 00:06:05.648251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.602 [2024-07-16 00:06:05.648262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.602 [2024-07-16 00:06:05.648270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.602 [2024-07-16 00:06:05.651774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.602 [2024-07-16 00:06:05.660865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.602 [2024-07-16 00:06:05.661578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.602 [2024-07-16 00:06:05.661616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.602 [2024-07-16 00:06:05.661628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.602 [2024-07-16 00:06:05.661865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.602 [2024-07-16 00:06:05.662086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.602 [2024-07-16 00:06:05.662096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.602 [2024-07-16 00:06:05.662103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.602 [2024-07-16 00:06:05.665615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.602 [2024-07-16 00:06:05.674704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.602 [2024-07-16 00:06:05.675297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.602 [2024-07-16 00:06:05.675336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.602 [2024-07-16 00:06:05.675352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.602 [2024-07-16 00:06:05.675593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.602 [2024-07-16 00:06:05.675814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.602 [2024-07-16 00:06:05.675823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.602 [2024-07-16 00:06:05.675830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.602 [2024-07-16 00:06:05.679346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.602 [2024-07-16 00:06:05.688640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.602 [2024-07-16 00:06:05.689222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.602 [2024-07-16 00:06:05.689247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.602 [2024-07-16 00:06:05.689255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.603 [2024-07-16 00:06:05.689472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.603 [2024-07-16 00:06:05.689689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.603 [2024-07-16 00:06:05.689697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.603 [2024-07-16 00:06:05.689704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.603 [2024-07-16 00:06:05.693202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.603 [2024-07-16 00:06:05.702488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.603 [2024-07-16 00:06:05.702939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.603 [2024-07-16 00:06:05.702956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.603 [2024-07-16 00:06:05.702964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.603 [2024-07-16 00:06:05.703180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.603 [2024-07-16 00:06:05.703401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.603 [2024-07-16 00:06:05.703411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.603 [2024-07-16 00:06:05.703418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.603 00:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:29:50.603 00:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # return 0 00:29:50.603 00:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:50.603 00:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:50.603 00:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:50.603 [2024-07-16 00:06:05.706915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
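The shell trace interleaved above ((( i == 0 )) followed by return 0 and timing_exit start_nvmf_tgt) marks the point where the freshly started nvmf_tgt is declared ready and the script moves on to configuring it. A generic sketch of that kind of readiness wait, assuming scripts/rpc.py is reachable from the working directory; the loop bound, sleep interval, and variable names are illustrative, not the values autotest_common.sh actually uses:

    # Hedged sketch: poll the target's RPC socket until it answers, then proceed.
    for ((i = 20; i != 0; i--)); do
        if scripts/rpc.py rpc_get_methods >/dev/null 2>&1; then
            break                    # target is up and answering RPCs
        fi
        sleep 0.5
    done
    if (( i == 0 )); then
        echo "nvmf_tgt never became ready" >&2
        exit 1
    fi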
00:29:50.603 [2024-07-16 00:06:05.716414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.603 [2024-07-16 00:06:05.717033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.603 [2024-07-16 00:06:05.717049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.603 [2024-07-16 00:06:05.717056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.603 [2024-07-16 00:06:05.717282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.603 [2024-07-16 00:06:05.717500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.603 [2024-07-16 00:06:05.717509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.603 [2024-07-16 00:06:05.717516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.603 [2024-07-16 00:06:05.721020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.603 [2024-07-16 00:06:05.730311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.603 [2024-07-16 00:06:05.730922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.603 [2024-07-16 00:06:05.730939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.603 [2024-07-16 00:06:05.730946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.603 [2024-07-16 00:06:05.731163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.603 [2024-07-16 00:06:05.731386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.603 [2024-07-16 00:06:05.731395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.603 [2024-07-16 00:06:05.731402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.603 [2024-07-16 00:06:05.734900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.603 [2024-07-16 00:06:05.744188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.603 [2024-07-16 00:06:05.744805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.603 [2024-07-16 00:06:05.744821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.603 [2024-07-16 00:06:05.744829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.603 [2024-07-16 00:06:05.745045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.603 00:06:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:50.603 [2024-07-16 00:06:05.745266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.603 [2024-07-16 00:06:05.745277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.603 [2024-07-16 00:06:05.745284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.603 00:06:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:50.603 00:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:50.603 00:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:50.603 [2024-07-16 00:06:05.748784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.603 [2024-07-16 00:06:05.752786] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:50.603 00:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:50.603 [2024-07-16 00:06:05.758073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.603 00:06:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:50.603 00:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:50.603 [2024-07-16 00:06:05.758694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.603 [2024-07-16 00:06:05.758710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.603 [2024-07-16 00:06:05.758717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.603 00:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:50.603 [2024-07-16 00:06:05.758934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.603 [2024-07-16 00:06:05.759151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.603 [2024-07-16 00:06:05.759159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.603 [2024-07-16 00:06:05.759166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:50.603 [2024-07-16 00:06:05.762670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.603 [2024-07-16 00:06:05.771953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.603 [2024-07-16 00:06:05.772543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.603 [2024-07-16 00:06:05.772560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.603 [2024-07-16 00:06:05.772567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.603 [2024-07-16 00:06:05.772784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.603 [2024-07-16 00:06:05.773001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.603 [2024-07-16 00:06:05.773009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.603 [2024-07-16 00:06:05.773016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.603 [2024-07-16 00:06:05.776517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.603 Malloc0 00:29:50.603 00:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:50.604 00:06:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:50.604 00:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:50.604 00:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:50.604 [2024-07-16 00:06:05.785805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.604 [2024-07-16 00:06:05.786522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.604 [2024-07-16 00:06:05.786562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.604 [2024-07-16 00:06:05.786574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.604 [2024-07-16 00:06:05.786812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.604 [2024-07-16 00:06:05.787033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.604 [2024-07-16 00:06:05.787042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.604 [2024-07-16 00:06:05.787050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.865 [2024-07-16 00:06:05.790560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.865 00:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:50.865 00:06:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:50.865 00:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:50.865 00:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:50.865 [2024-07-16 00:06:05.799649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.865 [2024-07-16 00:06:05.800238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.865 [2024-07-16 00:06:05.800276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.865 [2024-07-16 00:06:05.800287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.865 [2024-07-16 00:06:05.800524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.865 [2024-07-16 00:06:05.800745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.865 [2024-07-16 00:06:05.800754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.865 [2024-07-16 00:06:05.800761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.865 [2024-07-16 00:06:05.804268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.865 00:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:50.865 00:06:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:50.865 00:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:50.865 00:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:50.865 [2024-07-16 00:06:05.813556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.865 [2024-07-16 00:06:05.814128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.865 [2024-07-16 00:06:05.814166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c5540 with addr=10.0.0.2, port=4420 00:29:50.865 [2024-07-16 00:06:05.814177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c5540 is same with the state(5) to be set 00:29:50.865 [2024-07-16 00:06:05.814422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c5540 (9): Bad file descriptor 00:29:50.865 [2024-07-16 00:06:05.814644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.865 [2024-07-16 00:06:05.814637] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.865 [2024-07-16 00:06:05.814653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.865 [2024-07-16 00:06:05.814661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
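Scattered through the reconnect records above, host/bdevperf.sh@17 through @21 rebuild the target configuration: the TCP transport (the "*** TCP Transport Init ***" NOTICE), a 64 MB, 512-byte-block malloc bdev named Malloc0, the nqn.2016-06.io.spdk:cnode1 subsystem, its namespace, and finally the 10.0.0.2:4420 listener (the "Listening on 10.0.0.2 port 4420" NOTICE just above), after which the resets can finally succeed (the "Resetting controller successful" NOTICE a few lines below). Collected in one place, and assuming rpc_cmd is the usual wrapper around scripts/rpc.py, an equivalent standalone sequence would look like this, with arguments copied verbatim from the xtrace:

    # Hedged sketch of the target-side setup driven by bdevperf.sh lines 17-21.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420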
00:29:50.865 [2024-07-16 00:06:05.818162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.865 00:06:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:50.865 00:06:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 655936 00:29:50.865 [2024-07-16 00:06:05.827464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.865 [2024-07-16 00:06:05.863952] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:00.866 00:30:00.866 Latency(us) 00:30:00.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:00.866 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:00.866 Verification LBA range: start 0x0 length 0x4000 00:30:00.866 Nvme1n1 : 15.01 8394.03 32.79 9812.81 0.00 7005.11 781.65 18022.40 00:30:00.866 =================================================================================================================== 00:30:00.866 Total : 8394.03 32.79 9812.81 0.00 7005.11 781.65 18022.40 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:00.866 rmmod nvme_tcp 00:30:00.866 rmmod nvme_fabrics 00:30:00.866 rmmod nvme_keyring 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 657370 ']' 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 657370 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@942 -- # '[' -z 657370 ']' 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # kill -0 657370 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@947 -- # uname 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 657370 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@960 -- # echo 'killing process with pid 657370' 00:30:00.866 killing process with pid 657370 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@961 -- # kill 657370 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # wait 657370 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:00.866 00:06:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.809 00:06:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:01.809 00:30:01.809 real 0m28.638s 00:30:01.809 user 1m2.872s 00:30:01.809 sys 0m7.782s 00:30:01.809 00:06:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:30:01.809 00:06:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:01.809 ************************************ 00:30:01.809 END TEST nvmf_bdevperf 00:30:01.809 ************************************ 00:30:01.809 00:06:16 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:30:01.809 00:06:16 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:01.809 00:06:16 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:30:01.809 00:06:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:30:01.809 00:06:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:01.809 ************************************ 00:30:01.809 START TEST nvmf_target_disconnect 00:30:01.809 ************************************ 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:01.809 * Looking for test storage... 
00:30:01.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:01.809 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:01.810 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:01.810 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:30:01.810 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:01.810 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:01.810 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.810 00:06:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:01.810 00:06:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.810 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:01.810 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:01.810 00:06:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:30:01.810 00:06:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:09.947 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:09.948 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:09.948 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.948 00:06:24 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:09.948 Found net devices under 0000:31:00.0: cvl_0_0 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:09.948 Found net devices under 0000:31:00.1: cvl_0_1 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:09.948 00:06:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:09.948 00:06:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:09.948 00:06:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:30:09.948 00:06:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:09.948 00:06:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:10.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:10.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:30:10.210 00:30:10.210 --- 10.0.0.2 ping statistics --- 00:30:10.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.210 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:10.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:10.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:30:10.210 00:30:10.210 --- 10.0.0.1 ping statistics --- 00:30:10.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.210 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # xtrace_disable 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:10.210 ************************************ 00:30:10.210 START TEST nvmf_target_disconnect_tc1 00:30:10.210 ************************************ 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1117 -- # nvmf_target_disconnect_tc1 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # local es=0 00:30:10.210 
00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:10.210 [2024-07-16 00:06:25.382339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.210 [2024-07-16 00:06:25.382422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe1d4b0 with addr=10.0.0.2, port=4420 00:30:10.210 [2024-07-16 00:06:25.382453] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:10.210 [2024-07-16 00:06:25.382465] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:10.210 [2024-07-16 00:06:25.382474] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:30:10.210 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:10.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:10.210 Initializing NVMe Controllers 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@645 -- # es=1 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:30:10.210 00:30:10.210 real 0m0.126s 00:30:10.210 user 0m0.048s 00:30:10.210 sys 0m0.075s 00:30:10.210 00:06:25 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:30:10.210 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:10.210 ************************************ 00:30:10.210 END TEST nvmf_target_disconnect_tc1 00:30:10.210 ************************************ 00:30:10.471 00:06:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1136 -- # return 0 00:30:10.471 00:06:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:10.471 00:06:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:30:10.471 00:06:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # xtrace_disable 00:30:10.471 00:06:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:10.471 ************************************ 00:30:10.471 START TEST nvmf_target_disconnect_tc2 00:30:10.471 ************************************ 00:30:10.471 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1117 -- # nvmf_target_disconnect_tc2 00:30:10.471 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:10.471 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:10.471 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:10.471 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:10.471 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:10.471 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=664224 00:30:10.471 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 664224 00:30:10.471 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:10.471 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@823 -- # '[' -z 664224 ']' 00:30:10.471 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:10.471 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@828 -- # local max_retries=100 00:30:10.471 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:10.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:10.471 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # xtrace_disable 00:30:10.471 00:06:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:10.471 [2024-07-16 00:06:25.540487] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:30:10.471 [2024-07-16 00:06:25.540544] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:10.471 [2024-07-16 00:06:25.639019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:10.732 [2024-07-16 00:06:25.732851] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:10.732 [2024-07-16 00:06:25.732913] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:10.732 [2024-07-16 00:06:25.732922] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:10.732 [2024-07-16 00:06:25.732929] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:10.732 [2024-07-16 00:06:25.732935] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:10.732 [2024-07-16 00:06:25.733098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:30:10.732 [2024-07-16 00:06:25.733331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:30:10.732 [2024-07-16 00:06:25.733651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:30:10.732 [2024-07-16 00:06:25.733655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:11.303 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:30:11.303 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # return 0 00:30:11.303 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:11.303 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:11.303 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:11.303 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:11.303 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:11.303 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:11.303 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:11.303 Malloc0 00:30:11.303 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:30:11.303 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:11.303 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:11.303 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:11.303 [2024-07-16 00:06:26.412592] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:11.304 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 
00:30:11.304 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:11.304 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:11.304 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:11.304 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:30:11.304 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:11.304 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:11.304 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:11.304 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:30:11.304 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:11.304 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:11.304 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:11.304 [2024-07-16 00:06:26.452987] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:11.304 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:30:11.304 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:11.304 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:11.304 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:11.304 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:30:11.304 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=664571 00:30:11.304 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:11.304 00:06:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:13.873 00:06:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 664224 00:30:13.873 00:06:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O 
failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 [2024-07-16 00:06:28.486891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 
00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Read completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 Write completed with error (sct=0, sc=8) 00:30:13.873 starting I/O failed 00:30:13.873 [2024-07-16 00:06:28.487136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:13.873 [2024-07-16 00:06:28.487494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.873 [2024-07-16 00:06:28.487515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.873 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.487881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.487892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.488271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.488288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.488682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.488693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 
00:30:13.874 [2024-07-16 00:06:28.489059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.489070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.489441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.489454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.489835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.489847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.490154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.490165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.490537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.490550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.490916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.490928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.491250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.491263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.491624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.491635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.491951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.491962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.492131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.492146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 
00:30:13.874 [2024-07-16 00:06:28.492495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.492507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.493220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.493254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.493645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.493657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.493948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.493959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.494169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.494181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.494519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.494531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.494909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.494922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.495310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.495321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.495766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.495786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.496169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.496180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 
00:30:13.874 [2024-07-16 00:06:28.496524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.496536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.496796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.496807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.497189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.497199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.497484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.497494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.497792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.497803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.498013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.498026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.498167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.498178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.498564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.498575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.498966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.498977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 00:30:13.874 [2024-07-16 00:06:28.499243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.874 [2024-07-16 00:06:28.499254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.874 qpair failed and we were unable to recover it. 
00:30:13.874 [2024-07-16 00:06:28.499567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.499577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.499694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.499704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.499958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.499970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.500291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.500303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.500537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.500547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.500768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.500782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.501096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.501107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.501373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.501384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.501597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.501607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.501917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.501929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 
00:30:13.875 [2024-07-16 00:06:28.502164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.502175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.502387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.502398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.502585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.502598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.502824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.502836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.503207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.503217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.503574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.503585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.503919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.503930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.504177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.504187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.504544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.504556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.504896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.504907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 
00:30:13.875 [2024-07-16 00:06:28.505163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.505172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.505524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.505535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.505889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.505899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.506268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.506280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.506571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.506581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.506967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.506978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.507314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.507325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.507567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.507577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.507926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.507937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.508077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.508088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 
00:30:13.875 [2024-07-16 00:06:28.508413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.508423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.508747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.508758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.509174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.875 [2024-07-16 00:06:28.509185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.875 qpair failed and we were unable to recover it. 00:30:13.875 [2024-07-16 00:06:28.509537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.509549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.509897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.509907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.510287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.510298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.510654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.510664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.510998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.511008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.511348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.511359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.511700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.511711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 
00:30:13.876 [2024-07-16 00:06:28.512055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.512066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.512555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.512567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.512877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.512889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.513280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.513291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.513624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.513635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.513972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.513984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.514315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.514325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.514674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.514686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.515028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.515038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.515385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.515396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 
00:30:13.876 [2024-07-16 00:06:28.515602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.515612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.515995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.516005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.516364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.516375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.516608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.516618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.516866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.516876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.517251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.517261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.517594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.517604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.517974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.517984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.518360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.518370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.518738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.518749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 
00:30:13.876 [2024-07-16 00:06:28.519077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.519088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.519445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.519456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.519815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.519826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.520149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.520161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.520512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.520523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.520893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.876 [2024-07-16 00:06:28.520903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.876 qpair failed and we were unable to recover it. 00:30:13.876 [2024-07-16 00:06:28.521278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.521288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.521625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.521636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.522007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.522017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.522376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.522388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 
00:30:13.877 [2024-07-16 00:06:28.522711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.522722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.523055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.523065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.523479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.523489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.523793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.523804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.524016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.524027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.524270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.524281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.524623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.524633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.525008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.525018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.525392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.525403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.525775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.525785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 
00:30:13.877 [2024-07-16 00:06:28.526155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.526165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.526533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.526544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.526909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.526919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.527299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.527309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.527615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.527625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.527965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.527976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.528324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.528334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.528714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.528724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.529019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.529030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.529411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.529424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 
00:30:13.877 [2024-07-16 00:06:28.529797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.529809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.530181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.530191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.530419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.530430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.530657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.530668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.531000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.531010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.531388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.531399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.531772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.531783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.532093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.532104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.532437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.877 [2024-07-16 00:06:28.532448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.877 qpair failed and we were unable to recover it. 00:30:13.877 [2024-07-16 00:06:28.532854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.532864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 
00:30:13.878 [2024-07-16 00:06:28.533163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.533174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 00:30:13.878 [2024-07-16 00:06:28.533516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.533527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 00:30:13.878 [2024-07-16 00:06:28.533875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.533885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 00:30:13.878 [2024-07-16 00:06:28.534181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.534191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 00:30:13.878 [2024-07-16 00:06:28.534525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.534536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 00:30:13.878 [2024-07-16 00:06:28.534909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.534920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 00:30:13.878 [2024-07-16 00:06:28.535289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.535300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 00:30:13.878 [2024-07-16 00:06:28.535682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.535693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 00:30:13.878 [2024-07-16 00:06:28.536072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.536082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 00:30:13.878 [2024-07-16 00:06:28.536424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.536436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 
00:30:13.878 [2024-07-16 00:06:28.536805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.536816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 00:30:13.878 [2024-07-16 00:06:28.537154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.537164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 00:30:13.878 [2024-07-16 00:06:28.537503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.537513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 00:30:13.878 [2024-07-16 00:06:28.537876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.537887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 00:30:13.878 [2024-07-16 00:06:28.538225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.538249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 00:30:13.878 [2024-07-16 00:06:28.538603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.538613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 00:30:13.878 [2024-07-16 00:06:28.538986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.538996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 00:30:13.878 [2024-07-16 00:06:28.539359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.539372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 00:30:13.878 [2024-07-16 00:06:28.539700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.539710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 00:30:13.878 [2024-07-16 00:06:28.540045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.540055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 
00:30:13.878 [2024-07-16 00:06:28.540389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.540400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 00:30:13.878 [2024-07-16 00:06:28.540790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.540800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 00:30:13.878 [2024-07-16 00:06:28.541137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.541148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 00:30:13.878 [2024-07-16 00:06:28.541503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.541514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 00:30:13.878 [2024-07-16 00:06:28.541857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.541868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 00:30:13.878 [2024-07-16 00:06:28.542074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.878 [2024-07-16 00:06:28.542086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.878 qpair failed and we were unable to recover it. 00:30:13.879 [2024-07-16 00:06:28.542421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.879 [2024-07-16 00:06:28.542432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.879 qpair failed and we were unable to recover it. 00:30:13.879 [2024-07-16 00:06:28.542783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.879 [2024-07-16 00:06:28.542794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.879 qpair failed and we were unable to recover it. 00:30:13.879 [2024-07-16 00:06:28.543132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.879 [2024-07-16 00:06:28.543143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.879 qpair failed and we were unable to recover it. 00:30:13.879 [2024-07-16 00:06:28.543492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.879 [2024-07-16 00:06:28.543504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.879 qpair failed and we were unable to recover it. 
00:30:13.879 [2024-07-16 00:06:28.543876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.879 [2024-07-16 00:06:28.543887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.879 qpair failed and we were unable to recover it. 00:30:13.879 [2024-07-16 00:06:28.544223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.879 [2024-07-16 00:06:28.544238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.879 qpair failed and we were unable to recover it. 00:30:13.879 [2024-07-16 00:06:28.544595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.879 [2024-07-16 00:06:28.544606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.879 qpair failed and we were unable to recover it. 00:30:13.879 [2024-07-16 00:06:28.544975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.879 [2024-07-16 00:06:28.544987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.879 qpair failed and we were unable to recover it. 00:30:13.879 [2024-07-16 00:06:28.545328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.879 [2024-07-16 00:06:28.545339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.879 qpair failed and we were unable to recover it. 00:30:13.879 [2024-07-16 00:06:28.545744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.879 [2024-07-16 00:06:28.545755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.879 qpair failed and we were unable to recover it. 00:30:13.879 [2024-07-16 00:06:28.546123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.879 [2024-07-16 00:06:28.546135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.879 qpair failed and we were unable to recover it. 00:30:13.879 [2024-07-16 00:06:28.546348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.879 [2024-07-16 00:06:28.546360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.879 qpair failed and we were unable to recover it. 00:30:13.879 [2024-07-16 00:06:28.546609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.879 [2024-07-16 00:06:28.546620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.879 qpair failed and we were unable to recover it. 00:30:13.879 [2024-07-16 00:06:28.546962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.879 [2024-07-16 00:06:28.546972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.879 qpair failed and we were unable to recover it. 
00:30:13.879 [2024-07-16 00:06:28.547333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.879 [2024-07-16 00:06:28.547344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.879 qpair failed and we were unable to recover it. 00:30:13.879 [2024-07-16 00:06:28.547674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.879 [2024-07-16 00:06:28.547685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.879 qpair failed and we were unable to recover it. 00:30:13.879 [2024-07-16 00:06:28.548060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.879 [2024-07-16 00:06:28.548070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.879 qpair failed and we were unable to recover it. 00:30:13.879 [2024-07-16 00:06:28.548437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.879 [2024-07-16 00:06:28.548448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.879 qpair failed and we were unable to recover it. 00:30:13.879 [2024-07-16 00:06:28.548803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.879 [2024-07-16 00:06:28.548813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.879 qpair failed and we were unable to recover it. 00:30:13.879 [2024-07-16 00:06:28.549185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.879 [2024-07-16 00:06:28.549196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.879 qpair failed and we were unable to recover it. 00:30:13.879 [2024-07-16 00:06:28.549542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.879 [2024-07-16 00:06:28.549553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.879 qpair failed and we were unable to recover it. 00:30:13.879 [2024-07-16 00:06:28.549898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.879 [2024-07-16 00:06:28.549909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.879 qpair failed and we were unable to recover it. 00:30:13.879 [2024-07-16 00:06:28.550277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.879 [2024-07-16 00:06:28.550289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.879 qpair failed and we were unable to recover it. 00:30:13.879 [2024-07-16 00:06:28.550660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.879 [2024-07-16 00:06:28.550671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.879 qpair failed and we were unable to recover it. 
00:30:13.879 [2024-07-16 00:06:28.551017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.879 [2024-07-16 00:06:28.551027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:13.879 qpair failed and we were unable to recover it.
00:30:13.879 [2024-07-16 00:06:28.551396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.879 [2024-07-16 00:06:28.551407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:13.879 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt logged between 00:06:28.551 and 00:06:28.625 ...]
00:30:13.886 [2024-07-16 00:06:28.625295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.886 [2024-07-16 00:06:28.625305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:13.886 qpair failed and we were unable to recover it.
00:30:13.886 [2024-07-16 00:06:28.625674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.886 [2024-07-16 00:06:28.625684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:13.886 qpair failed and we were unable to recover it.
00:30:13.886 [2024-07-16 00:06:28.626106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.626117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 00:30:13.886 [2024-07-16 00:06:28.626460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.626471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 00:30:13.886 [2024-07-16 00:06:28.626810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.626821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 00:30:13.886 [2024-07-16 00:06:28.627166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.627177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 00:30:13.886 [2024-07-16 00:06:28.627561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.627571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 00:30:13.886 [2024-07-16 00:06:28.627914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.627925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 00:30:13.886 [2024-07-16 00:06:28.628345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.628355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 00:30:13.886 [2024-07-16 00:06:28.628613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.628623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 00:30:13.886 [2024-07-16 00:06:28.628991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.629001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 00:30:13.886 [2024-07-16 00:06:28.629351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.629362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 
00:30:13.886 [2024-07-16 00:06:28.629714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.629724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 00:30:13.886 [2024-07-16 00:06:28.630102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.630112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 00:30:13.886 [2024-07-16 00:06:28.630490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.630501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 00:30:13.886 [2024-07-16 00:06:28.630849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.630860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 00:30:13.886 [2024-07-16 00:06:28.631239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.631250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 00:30:13.886 [2024-07-16 00:06:28.631582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.631595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 00:30:13.886 [2024-07-16 00:06:28.632015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.632026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 00:30:13.886 [2024-07-16 00:06:28.632373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.632384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 00:30:13.886 [2024-07-16 00:06:28.632735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.632746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 00:30:13.886 [2024-07-16 00:06:28.633088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.633098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 
00:30:13.886 [2024-07-16 00:06:28.633460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.633471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 00:30:13.886 [2024-07-16 00:06:28.633822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.633834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 00:30:13.886 [2024-07-16 00:06:28.634191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.634202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 00:30:13.886 [2024-07-16 00:06:28.634571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.634581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 00:30:13.886 [2024-07-16 00:06:28.634929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.634940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.886 qpair failed and we were unable to recover it. 00:30:13.886 [2024-07-16 00:06:28.635287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.886 [2024-07-16 00:06:28.635297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.635642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.635653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.636043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.636053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.636402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.636413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.636791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.636801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 
00:30:13.887 [2024-07-16 00:06:28.637146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.637156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.637442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.637452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.637821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.637831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.638241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.638253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.638616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.638626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.638994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.639005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.639339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.639351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.639704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.639716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.640086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.640096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.640464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.640475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 
00:30:13.887 [2024-07-16 00:06:28.640673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.640684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.641049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.641060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.641404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.641417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.641765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.641776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.642146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.642157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.642507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.642518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.642867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.642877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.643192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.643202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.643562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.643573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.643883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.643894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 
00:30:13.887 [2024-07-16 00:06:28.644262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.644273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.644587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.644598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.644950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.644961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.645336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.645347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.645699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.645710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.646055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.646066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.646439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.646450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.646796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.646806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.647162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.887 [2024-07-16 00:06:28.647172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.887 qpair failed and we were unable to recover it. 00:30:13.887 [2024-07-16 00:06:28.647547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.647558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 
00:30:13.888 [2024-07-16 00:06:28.647905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.647917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.648287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.648298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.648680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.648690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.649036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.649046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.649392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.649404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.649733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.649744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.650099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.650110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.650489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.650500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.650873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.650884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.651236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.651249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 
00:30:13.888 [2024-07-16 00:06:28.651581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.651593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.651964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.651976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.652361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.652372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.652727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.652738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.653110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.653121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.653493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.653503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.653853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.653864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.654219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.654243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.654563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.654574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.654923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.654934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 
00:30:13.888 [2024-07-16 00:06:28.655300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.655311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.655689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.655699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.656051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.656061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.656431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.656442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.656777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.656788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.657174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.657185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.657552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.657563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.657909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.657919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.658269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.658281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.658623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.658633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 
00:30:13.888 [2024-07-16 00:06:28.658975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.658985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.659333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.888 [2024-07-16 00:06:28.659344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.888 qpair failed and we were unable to recover it. 00:30:13.888 [2024-07-16 00:06:28.659677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.659687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.660036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.660046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.660394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.660405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.660710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.660721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.661075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.661085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.661434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.661445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.661865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.661876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.662212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.662222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 
00:30:13.889 [2024-07-16 00:06:28.662560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.662572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.662874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.662885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.663199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.663210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.663529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.663540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.663783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.663793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.664052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.664064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.664416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.664426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.664796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.664807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.665153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.665165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.665511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.665522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 
00:30:13.889 [2024-07-16 00:06:28.665894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.665904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.666287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.666297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.666548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.666558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.667014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.667024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.667226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.667240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.667598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.667608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.667935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.667945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.668281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.668293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.668662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.668672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 00:30:13.889 [2024-07-16 00:06:28.668873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.889 [2024-07-16 00:06:28.668884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.889 qpair failed and we were unable to recover it. 
00:30:13.890 [2024-07-16 00:06:28.669204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.890 [2024-07-16 00:06:28.669214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.890 qpair failed and we were unable to recover it. 00:30:13.890 [2024-07-16 00:06:28.669494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.890 [2024-07-16 00:06:28.669507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.890 qpair failed and we were unable to recover it. 00:30:13.890 [2024-07-16 00:06:28.669875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.890 [2024-07-16 00:06:28.669885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.890 qpair failed and we were unable to recover it. 00:30:13.890 [2024-07-16 00:06:28.670234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.890 [2024-07-16 00:06:28.670245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.890 qpair failed and we were unable to recover it. 00:30:13.890 [2024-07-16 00:06:28.670593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.890 [2024-07-16 00:06:28.670604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.890 qpair failed and we were unable to recover it. 00:30:13.890 [2024-07-16 00:06:28.670973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.890 [2024-07-16 00:06:28.670984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.890 qpair failed and we were unable to recover it. 00:30:13.890 [2024-07-16 00:06:28.671333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.890 [2024-07-16 00:06:28.671344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.890 qpair failed and we were unable to recover it. 00:30:13.890 [2024-07-16 00:06:28.671697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.890 [2024-07-16 00:06:28.671707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.890 qpair failed and we were unable to recover it. 00:30:13.890 [2024-07-16 00:06:28.672074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.890 [2024-07-16 00:06:28.672085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.890 qpair failed and we were unable to recover it. 00:30:13.890 [2024-07-16 00:06:28.672435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.890 [2024-07-16 00:06:28.672446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.890 qpair failed and we were unable to recover it. 
00:30:13.890 [2024-07-16 00:06:28.672619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.890 [2024-07-16 00:06:28.672630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.890 qpair failed and we were unable to recover it.
[... the same pair of errors (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420) followed by "qpair failed and we were unable to recover it." recurs for every reconnect attempt from 00:06:28.672950 through 00:06:28.748001 ...]
00:30:13.896 [2024-07-16 00:06:28.748332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-16 00:06:28.748343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.896 qpair failed and we were unable to recover it.
00:30:13.896 [2024-07-16 00:06:28.748708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-16 00:06:28.748718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.896 qpair failed and we were unable to recover it. 00:30:13.896 [2024-07-16 00:06:28.749071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-16 00:06:28.749082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.896 qpair failed and we were unable to recover it. 00:30:13.896 [2024-07-16 00:06:28.749443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-16 00:06:28.749453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.896 qpair failed and we were unable to recover it. 00:30:13.896 [2024-07-16 00:06:28.749808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-16 00:06:28.749819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.896 qpair failed and we were unable to recover it. 00:30:13.896 [2024-07-16 00:06:28.750052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-16 00:06:28.750062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.896 qpair failed and we were unable to recover it. 00:30:13.896 [2024-07-16 00:06:28.750443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-16 00:06:28.750455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.896 qpair failed and we were unable to recover it. 00:30:13.896 [2024-07-16 00:06:28.750532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.750541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.750901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.750912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.751286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.751298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.751644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.751655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 
00:30:13.897 [2024-07-16 00:06:28.751996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.752007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.752354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.752365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.752702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.752712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.753094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.753106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.753447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.753460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.753651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.753663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.754004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.754014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.754362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.754372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.754714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.754724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.755072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.755082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 
00:30:13.897 [2024-07-16 00:06:28.755432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.755442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.755801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.755812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.756153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.756164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.756511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.756522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.756861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.756872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.757219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.757242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.757607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.757617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.757954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.757965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.758344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.758355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.758673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.758683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 
00:30:13.897 [2024-07-16 00:06:28.759032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.759042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.759395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.759406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.759744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.759755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.760100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.760111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.760517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.760528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.760877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.760888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.761263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.761274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.761616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.761626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.761967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-16 00:06:28.761978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-16 00:06:28.762266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.762277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 
00:30:13.898 [2024-07-16 00:06:28.762629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.762639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.762966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.762978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.763330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.763341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.763683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.763694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.764064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.764076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.764434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.764445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.764797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.764808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.765157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.765167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.765505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.765516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.765862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.765873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 
00:30:13.898 [2024-07-16 00:06:28.766176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.766187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.766537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.766548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.766913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.766924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.767284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.767294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.767675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.767685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.768032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.768042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.768412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.768423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.768771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.768782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.769063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.769074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.769437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.769447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 
00:30:13.898 [2024-07-16 00:06:28.769796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.769807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.770125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.770135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.770497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.770508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.770844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.770855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.771248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.771259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.771689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.771700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.772045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.772056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.772394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.772405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.772772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.772783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.773125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.773137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 
00:30:13.898 [2024-07-16 00:06:28.773501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.773512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.773758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-16 00:06:28.773768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-16 00:06:28.774134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.774144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.774540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.774551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.774893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.774903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.775249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.775259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.775587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.775598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.775948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.775957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.776331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.776342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.776709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.776720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 
00:30:13.899 [2024-07-16 00:06:28.777092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.777103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.777471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.777482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.777781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.777791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.778065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.778075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.778324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.778334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.778760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.778770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.779103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.779114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.779467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.779477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.779852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.779862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.780210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.780221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 
00:30:13.899 [2024-07-16 00:06:28.780564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.780576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.780915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.780925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.781189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.781198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.781536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.781547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.781890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.781900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.782253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.782264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.782618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.782628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.782969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.782979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.783344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.783355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-16 00:06:28.783606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.783616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 
00:30:13.899 [2024-07-16 00:06:28.783987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-16 00:06:28.783999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.784254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.784265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.784624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.784634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.784982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.784992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.785363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.785373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.785716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.785726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.786089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.786099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.786396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.786406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.786757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.786767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.787123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.787136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 
00:30:13.900 [2024-07-16 00:06:28.787460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.787472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.787828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.787838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.788067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.788078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.788415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.788425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.788789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.788799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.789186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.789196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.789540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.789551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.789894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.789905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.790237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.790249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.790616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.790626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 
00:30:13.900 [2024-07-16 00:06:28.790967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.790977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.791324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.791335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.791713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.791724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.792072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.792084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.792407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.792417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.792773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.792783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.793132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.793142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.793355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.793367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.793739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.793750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-16 00:06:28.794095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-16 00:06:28.794106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 
00:30:13.900 [2024-07-16 00:06:28.794480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-16 00:06:28.794491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:13.900 qpair failed and we were unable to recover it.
00:30:13.900-00:30:13.907 [2024-07-16 00:06:28.794653 - 00:06:28.868861] the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously throughout this interval.
00:30:13.907 [2024-07-16 00:06:28.869074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.907 [2024-07-16 00:06:28.869086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.907 qpair failed and we were unable to recover it. 00:30:13.907 [2024-07-16 00:06:28.869441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.907 [2024-07-16 00:06:28.869453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.907 qpair failed and we were unable to recover it. 00:30:13.907 [2024-07-16 00:06:28.869798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.907 [2024-07-16 00:06:28.869809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.907 qpair failed and we were unable to recover it. 00:30:13.907 [2024-07-16 00:06:28.870117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.907 [2024-07-16 00:06:28.870129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.907 qpair failed and we were unable to recover it. 00:30:13.907 [2024-07-16 00:06:28.870452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.907 [2024-07-16 00:06:28.870463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.907 qpair failed and we were unable to recover it. 00:30:13.907 [2024-07-16 00:06:28.870806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.907 [2024-07-16 00:06:28.870817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.907 qpair failed and we were unable to recover it. 00:30:13.907 [2024-07-16 00:06:28.871171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.907 [2024-07-16 00:06:28.871183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.907 qpair failed and we were unable to recover it. 00:30:13.907 [2024-07-16 00:06:28.871486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.907 [2024-07-16 00:06:28.871496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.907 qpair failed and we were unable to recover it. 00:30:13.907 [2024-07-16 00:06:28.871849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.907 [2024-07-16 00:06:28.871860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.907 qpair failed and we were unable to recover it. 00:30:13.907 [2024-07-16 00:06:28.872206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.872217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 
00:30:13.908 [2024-07-16 00:06:28.872606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.872616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.872963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.872975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.873346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.873357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.873737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.873749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.874092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.874103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.874438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.874450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.874824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.874834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.875185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.875196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.875544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.875556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.875900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.875911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 
00:30:13.908 [2024-07-16 00:06:28.876279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.876289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.876644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.876655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.877001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.877012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.877357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.877368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.877736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.877747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.878103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.878113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.878479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.878489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.878689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.878704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.879045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.879055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.879394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.879405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 
00:30:13.908 [2024-07-16 00:06:28.879753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.879764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.880111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.880121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.880483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.880493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.880841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.880853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.881199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.881210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.881635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.881646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.882015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.882026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.882363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.882374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.882719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.882730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.908 [2024-07-16 00:06:28.883081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.883092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 
00:30:13.908 [2024-07-16 00:06:28.883462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.908 [2024-07-16 00:06:28.883475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.908 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.883678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.883689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.884052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.884063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.884405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.884415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.884792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.884803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.885194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.885205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.885554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.885566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.885914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.885925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.886297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.886307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.886675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.886685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 
00:30:13.909 [2024-07-16 00:06:28.887032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.887042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.887391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.887402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.887786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.887796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.888148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.888159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.888526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.888536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.888840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.888852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.889210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.889221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.889476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.889486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.889740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.889750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.890097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.890107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 
00:30:13.909 [2024-07-16 00:06:28.890470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.890481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.890830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.890840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.891187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.891197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.891619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.891630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.891999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.892009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.892353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.892364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.892711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.892722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.893068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.893078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.893406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.893418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.893756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.893767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 
00:30:13.909 [2024-07-16 00:06:28.894117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.894128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.894494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.894504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.894882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.894892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-16 00:06:28.895239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-16 00:06:28.895250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.895606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.895618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.895965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.895976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.896348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.896359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.896734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.896744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.897091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.897101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.897443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.897456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 
00:30:13.910 [2024-07-16 00:06:28.897833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.897843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.898205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.898216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.898560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.898571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.898917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.898928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.899296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.899308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.899518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.899529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.899867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.899877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.900225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.900239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.900579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.900589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.900972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.900982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 
00:30:13.910 [2024-07-16 00:06:28.901331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.901342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.901712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.901722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.902096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.902108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.902470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.902480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.902828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.902839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.903188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.903198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.903564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.903575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.903921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.903931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.904277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.904288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.904626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.904637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 
00:30:13.910 [2024-07-16 00:06:28.905003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.905014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.905359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-16 00:06:28.905370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-16 00:06:28.905717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-16 00:06:28.905728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 00:30:13.911 [2024-07-16 00:06:28.906075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-16 00:06:28.906086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 00:30:13.911 [2024-07-16 00:06:28.906444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-16 00:06:28.906454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 00:30:13.911 [2024-07-16 00:06:28.906665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-16 00:06:28.906676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 00:30:13.911 [2024-07-16 00:06:28.907023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-16 00:06:28.907035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 00:30:13.911 [2024-07-16 00:06:28.907329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-16 00:06:28.907339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 00:30:13.911 [2024-07-16 00:06:28.907715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-16 00:06:28.907727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 00:30:13.911 [2024-07-16 00:06:28.908074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-16 00:06:28.908084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 
00:30:13.911 [2024-07-16 00:06:28.908432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-16 00:06:28.908444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 00:30:13.911 [2024-07-16 00:06:28.908782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-16 00:06:28.908792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 00:30:13.911 [2024-07-16 00:06:28.909119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-16 00:06:28.909129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 00:30:13.911 [2024-07-16 00:06:28.909499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-16 00:06:28.909510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 00:30:13.911 [2024-07-16 00:06:28.909909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.909919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.910261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.910273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.910616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.910626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.910972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.910982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.911375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.911386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.911746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.911756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 
00:30:13.912 [2024-07-16 00:06:28.912125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.912135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.912513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.912524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.912872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.912883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.913223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.913238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.913584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.913594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.913829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.913839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.914097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.914108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.914522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.914532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.914908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.914919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.915267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.915278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 
00:30:13.912 [2024-07-16 00:06:28.915643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.915653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.916000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.916011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.916376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.916386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.916758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.916768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.917110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.917120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.917332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.917345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.917725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.917735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.918081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.918092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.918437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.918449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.918817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.918827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 
00:30:13.912 [2024-07-16 00:06:28.919196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.919206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.919544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.919555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.919903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.919913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.920260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.920271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.920605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.920615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.920995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.921005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.921356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.921366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.921714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.921724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.922095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.922106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 00:30:13.912 [2024-07-16 00:06:28.922446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.912 [2024-07-16 00:06:28.922458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.912 qpair failed and we were unable to recover it. 
00:30:13.913 [2024-07-16 00:06:28.922804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.922814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.923160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.923170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.923512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.923523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.923868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.923880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.924228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.924242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.924476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.924486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.924737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.924747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.925116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.925126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.925493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.925503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.925849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.925860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 
00:30:13.913 [2024-07-16 00:06:28.926099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.926108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.926475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.926486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.926833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.926844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.927275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.927286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.927628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.927639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.927989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.928000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.928339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.928350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.928715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.928726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.929105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.929115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.929489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.929500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 
00:30:13.913 [2024-07-16 00:06:28.929844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.929854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.930203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.930213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.930615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.930625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.930841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.930853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.931200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.931210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.931410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.931421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.931770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.931780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.932139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.932150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.932503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.932514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.932860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.932871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 
00:30:13.913 [2024-07-16 00:06:28.933240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.933251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.933608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.933618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.933966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.933976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.934328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.934338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.934715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.934725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.913 [2024-07-16 00:06:28.935077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.913 [2024-07-16 00:06:28.935088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.913 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.935288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.935306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.935634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.935644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.936009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.936019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.936384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.936395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 
00:30:13.914 [2024-07-16 00:06:28.936749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.936760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.937109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.937119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.937319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.937330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.937659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.937670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.938023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.938033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.938379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.938390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.938687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.938697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.939039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.939049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.939435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.939445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.939791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.939802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 
00:30:13.914 [2024-07-16 00:06:28.940173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.940184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.940565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.940576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.940914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.940925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.941270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.941283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.941633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.941644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.941915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.941925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.942278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.942289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.942644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.942654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.943020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.943031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.943383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.943397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 
00:30:13.914 [2024-07-16 00:06:28.943737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.943749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.944117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.944127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.944481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.944494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.944658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.944669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.945022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.945033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.945378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.945389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.945707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.945717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.946068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.946079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.946415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.946425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.946818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.946828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 
00:30:13.914 [2024-07-16 00:06:28.947166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.947177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.947568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-16 00:06:28.947578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-16 00:06:28.947923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.947934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.948279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.948291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.948637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.948647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.949001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.949012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.949361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.949371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.949725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.949735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.950102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.950112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.950470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.950481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 
00:30:13.915 [2024-07-16 00:06:28.950731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.950743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.950988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.950997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.951313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.951324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.951563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.951574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.951875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.951885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.952241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.952251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.952574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.952585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.952931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.952941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.953243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.953255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.953623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.953633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 
00:30:13.915 [2024-07-16 00:06:28.953999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.954009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.954347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.954359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.954588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.954597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.954936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.954947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.955335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.955346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.955712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.955723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.956085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.956096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.956436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.956446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.956817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.956828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.957175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.957185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 
00:30:13.915 [2024-07-16 00:06:28.957607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.957618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.957957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.957968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.958326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.958337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.958709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.958720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.959014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.959025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.959376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.959386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.959704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.959715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.960062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-16 00:06:28.960075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-16 00:06:28.960422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.960432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.960780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.960790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 
00:30:13.916 [2024-07-16 00:06:28.961156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.961167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.961517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.961527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.961872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.961883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.962227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.962287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.962642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.962653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.963005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.963015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.963360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.963370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.963715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.963726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.964091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.964102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.964467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.964477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 
00:30:13.916 [2024-07-16 00:06:28.964823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.964833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.965177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.965189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.965559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.965570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.965772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.965782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.966142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.966153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.966555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.966566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.966806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.966816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.967161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.967171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.967524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.967534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.967734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.967744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 
00:30:13.916 [2024-07-16 00:06:28.968091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.968102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.968468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.968478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.968818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.968828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.969172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.969182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.969556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.969566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.969791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.969802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.970151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.970161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.970512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.970523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.970892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-16 00:06:28.970902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-16 00:06:28.971241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.971252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 
00:30:13.917 [2024-07-16 00:06:28.971596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.971608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.971964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.971975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.972340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.972351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.972682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.972693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.973038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.973049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.973395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.973406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.973767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.973777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.974127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.974138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.974505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.974516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.974862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.974873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 
00:30:13.917 [2024-07-16 00:06:28.975243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.975254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.975609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.975619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.975964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.975974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.976317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.976329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.976698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.976708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.977061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.977072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.977416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.977427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.977778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.977788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.978093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.978104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.978525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.978536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 
00:30:13.917 [2024-07-16 00:06:28.978877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.978888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.979235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.979246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.979595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.979605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.979950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.979962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.980414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.980452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.980769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.980782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.981140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.981152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.981506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.981518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.981875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.981886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.982236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.982247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 
00:30:13.917 [2024-07-16 00:06:28.982586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.982598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.982945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.982956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.983342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.983353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.983708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.983719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.984090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.984101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.984471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.984485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.984836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.984847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-16 00:06:28.985190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-16 00:06:28.985200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.985545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.985557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.985907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.985918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 
00:30:13.918 [2024-07-16 00:06:28.986269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.986280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.986591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.986601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.986971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.986981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.987325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.987336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.987700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.987711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.988058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.988069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.988418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.988430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.988776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.988786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.989155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.989165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.989538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.989550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 
00:30:13.918 [2024-07-16 00:06:28.989899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.989909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.990237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.990248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.990582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.990592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.990966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.990976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.991327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.991338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.991686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.991697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.991945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.991956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.992329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.992340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.992588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.992598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.992948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.992958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 
00:30:13.918 [2024-07-16 00:06:28.993308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.993319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.993626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.993636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.993984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.993997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.994365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.994376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.994722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.994732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.995110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.995122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.995378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.995389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.995775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.995785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.996038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.996048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.996422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.996433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 
00:30:13.918 [2024-07-16 00:06:28.996781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.996791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.997160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.997171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.997525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.997536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.997771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.997782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.998128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.998139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.998484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.998495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.998831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.998842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.999211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.999222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.918 qpair failed and we were unable to recover it. 00:30:13.918 [2024-07-16 00:06:28.999555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.918 [2024-07-16 00:06:28.999565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:28.999901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:28.999913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 
00:30:13.919 [2024-07-16 00:06:29.000254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.000266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.000655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.000666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.001015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.001026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.001401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.001412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.001780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.001791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.002161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.002172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.002389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.002399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.002749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.002759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.003184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.003194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.003551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.003562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 
00:30:13.919 [2024-07-16 00:06:29.003911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.003923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.004290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.004301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.004670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.004681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.004985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.004996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.005351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.005363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.005736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.005747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.006087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.006097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.006384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.006394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.006736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.006746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.007103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.007114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 
00:30:13.919 [2024-07-16 00:06:29.007434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.007445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.007809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.007820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.008207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.008219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.008591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.008602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.008969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.008979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.009329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.009339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.009697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.009707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.010077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.010088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.010390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.010400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.010783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.010794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 
00:30:13.919 [2024-07-16 00:06:29.011138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.011148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.011477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.011489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.011871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.011881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.012243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.012254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.012605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.012615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.012984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.012994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.013333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.013345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.013689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.013699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.013920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.013930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-16 00:06:29.014259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-16 00:06:29.014269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 
00:30:13.919 [2024-07-16 00:06:29.014637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.014648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.014992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.015002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.015343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.015353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.015720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.015730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.016041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.016053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.016407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.016418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.016765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.016775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.017141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.017152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.017489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.017500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.017706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.017719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 
00:30:13.920 [2024-07-16 00:06:29.018068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.018082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.018409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.018420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.018774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.018785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.019140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.019151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.019390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.019401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.019770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.019781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.020151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.020162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.020404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.020416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.020761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.020772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.021143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.021154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 
00:30:13.920 [2024-07-16 00:06:29.021472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.021484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.021825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.021836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.022078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.022089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.022431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.022441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.022828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.022838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.023168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.023179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.023518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.023528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.023902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.023912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.024270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.024282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.024643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.024654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 
00:30:13.920 [2024-07-16 00:06:29.025002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.025012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.025345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.025356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.025711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.025721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.026069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.026079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.026354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.026364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-16 00:06:29.026701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-16 00:06:29.026711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-16 00:06:29.027058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-16 00:06:29.027068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-16 00:06:29.027368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-16 00:06:29.027380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-16 00:06:29.027684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-16 00:06:29.027694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-16 00:06:29.028059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-16 00:06:29.028070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 
00:30:13.921 [2024-07-16 00:06:29.028415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-16 00:06:29.028426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-16 00:06:29.028772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-16 00:06:29.028782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-16 00:06:29.029026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-16 00:06:29.029036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-16 00:06:29.029386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-16 00:06:29.029397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-16 00:06:29.029753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-16 00:06:29.029765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-16 00:06:29.030106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-16 00:06:29.030116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-16 00:06:29.030482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-16 00:06:29.030493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-16 00:06:29.030842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-16 00:06:29.030852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-16 00:06:29.031199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-16 00:06:29.031210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-16 00:06:29.031576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-16 00:06:29.031587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 
00:30:13.921 [2024-07-16 00:06:29.031977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-16 00:06:29.031988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-16 00:06:29.032211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-16 00:06:29.032221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-16 00:06:29.032591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-16 00:06:29.032601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-16 00:06:29.032948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-16 00:06:29.032958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-16 00:06:29.033263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-16 00:06:29.033273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-16 00:06:29.033597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-16 00:06:29.033606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-16 00:06:29.033976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-16 00:06:29.033986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-16 00:06:29.034338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-16 00:06:29.034350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-16 00:06:29.034706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-16 00:06:29.034717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-16 00:06:29.035088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-16 00:06:29.035100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 
00:30:14.254 [2024-07-16 00:06:29.035480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.254 [2024-07-16 00:06:29.035492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.254 qpair failed and we were unable to recover it.
00:30:14.254 [... the same three-message failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats back-to-back for each reconnect attempt from 00:06:29.035 through 00:06:29.114 ...]
00:30:14.258 [2024-07-16 00:06:29.114017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.114027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it.
00:30:14.258 [2024-07-16 00:06:29.114371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.114383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.114754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.114764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.115110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.115120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.115454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.115464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.115710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.115720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.116097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.116108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.116483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.116494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.116862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.116873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.117242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.117252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.117580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.117590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 
00:30:14.258 [2024-07-16 00:06:29.117806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.117816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.118220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.118239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.118559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.118570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.118941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.118951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.119341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.119352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.119600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.119610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.119754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.119766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.120087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.120097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.120446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.120457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.120806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.120816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 
00:30:14.258 [2024-07-16 00:06:29.121184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.121195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.121455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.121465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.121810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.121821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.122168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.122179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.122515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.122527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.122901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.122912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.123281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.123291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.123570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.123580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.123936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.123946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.124315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.124326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 
00:30:14.258 [2024-07-16 00:06:29.124676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.124686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.125033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.125043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.125401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.125411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.125741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.125752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.126090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.126101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.126440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.126450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.126799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.126809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.127042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.127052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.127417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.127428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.127777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.127787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 
00:30:14.258 [2024-07-16 00:06:29.128132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.128142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.128489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.128500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.128891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.128902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.129206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.129218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.129564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.129575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.129949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.129960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.130307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.130318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.130713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.130724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.131070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.258 [2024-07-16 00:06:29.131081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.258 qpair failed and we were unable to recover it. 00:30:14.258 [2024-07-16 00:06:29.131301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.131312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 
00:30:14.259 [2024-07-16 00:06:29.131678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.131688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.131861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.131872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.132235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.132246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.132571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.132583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.132952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.132963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.133192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.133202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.133544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.133555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.133896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.133908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.134261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.134271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.134451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.134461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 
00:30:14.259 [2024-07-16 00:06:29.134779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.134790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.135160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.135171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.135494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.135506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.135875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.135886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.136246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.136257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.136581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.136592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.136935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.136947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.137295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.137306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.137669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.137682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.138008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.138020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 
00:30:14.259 [2024-07-16 00:06:29.138332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.138343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.138702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.138713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.139051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.139062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.139404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.139416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.139783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.139793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.140137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.140148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.140510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.140522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.140892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.140902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.141244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.141255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.141604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.141618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 
00:30:14.259 [2024-07-16 00:06:29.141967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.141977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.142325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.142337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.142700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.142710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.143021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.143031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.143335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.143354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.143743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.143754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.144144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.144155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.144417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.144429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.144817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.144827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.145203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.145214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 
00:30:14.259 [2024-07-16 00:06:29.145568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.145579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.145938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.145949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.146293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.146304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.146681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.146691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.147037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.147047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.147396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.147408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.147754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.147764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.148138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.148148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.148493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.148504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.148851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.148862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 
00:30:14.259 [2024-07-16 00:06:29.149208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.149219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.149562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.149573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.149921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.149932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.150290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.150300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.150646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.150657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.151024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.151034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.151388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.151401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.151759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.151770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.152108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.152119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.152474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.152486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 
00:30:14.259 [2024-07-16 00:06:29.152838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.152848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.153188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.153200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.153547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.153559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.153931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.153941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.154294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.154304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.154676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.154687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.155040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.155050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.155427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.155439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.155786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.155796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.259 qpair failed and we were unable to recover it. 00:30:14.259 [2024-07-16 00:06:29.156170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.259 [2024-07-16 00:06:29.156182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 
00:30:14.260 [2024-07-16 00:06:29.156370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.156381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.156718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.156729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.156977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.156987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.157340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.157351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.157701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.157712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.158088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.158098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.158467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.158479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.158824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.158834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.159173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.159184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.159514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.159526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 
00:30:14.260 [2024-07-16 00:06:29.159872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.159882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.160236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.160247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.160613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.160624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.160999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.161009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.161385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.161395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.161620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.161630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.161976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.161986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.162313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.162325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.162684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.162695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.163009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.163019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 
00:30:14.260 [2024-07-16 00:06:29.163376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.163387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.163719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.163729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.164066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.164077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.164335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.164346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.164690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.164700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.165069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.165080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.165432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.165443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.165809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.165819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.166165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.166176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.166516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.166528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 
00:30:14.260 [2024-07-16 00:06:29.166890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.166901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.167254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.167265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.167640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.167650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.168016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.168026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.168381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.168392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.168748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.168759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.169109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.169119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.169559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.169569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.169907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.169917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.170268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.170281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 
00:30:14.260 [2024-07-16 00:06:29.170630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.170640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.171010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.171020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.171383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.171394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.171739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.171750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.172098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.172109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.172473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.172484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.172839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.172849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.173198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.173209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.173560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.173571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.173950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.173961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 
00:30:14.260 [2024-07-16 00:06:29.174315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.174326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.174564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.174574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.174923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.174935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.175344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.175356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.175628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.175640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.175998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.176008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.260 [2024-07-16 00:06:29.176341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.260 [2024-07-16 00:06:29.176352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.260 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.176726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.176736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.177082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.177092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.177443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.177454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 
00:30:14.261 [2024-07-16 00:06:29.177801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.177812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.178179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.178190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.178550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.178561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.178760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.178772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.179132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.179144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.179489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.179501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.179897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.179908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.180155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.180166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.180513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.180525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.180896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.180906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 
00:30:14.261 [2024-07-16 00:06:29.181256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.181268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.181631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.181642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.181993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.182004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.182203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.182214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.182549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.182560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.182906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.182917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.183256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.183267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.183600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.183610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.183947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.183957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.184305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.184317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 
00:30:14.261 [2024-07-16 00:06:29.184699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.184709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.185115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.185128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.185484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.185495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.185839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.185850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.186187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.186197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.186539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.186550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.186891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.186902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.187288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.187299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.187535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.187545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.187883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.187893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 
00:30:14.261 [2024-07-16 00:06:29.188210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.188221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.188523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.188534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.188906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.188917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.189079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.189089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.189337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.189349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.189710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.189721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.190062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.190072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.190417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.190428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.190770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.190780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.191018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.191028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 
00:30:14.261 [2024-07-16 00:06:29.191375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.191386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.191718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.191729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.192098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.192109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.192460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.192471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.192813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.192824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.193197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.193208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.193563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.193573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.193912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.193923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.194272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.194285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.194515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.194525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 
00:30:14.261 [2024-07-16 00:06:29.194860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.194870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.195211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.195222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.195637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.195648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.195981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.195991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.196341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.196351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.196695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.196705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.197047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.197058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.197324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.197334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.197705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.197715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.198053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.198063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 
00:30:14.261 [2024-07-16 00:06:29.198410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.261 [2024-07-16 00:06:29.198422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.261 qpair failed and we were unable to recover it. 00:30:14.261 [2024-07-16 00:06:29.198797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.198808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.199148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.199159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.199484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.199495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.199835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.199845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.200220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.200233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.200572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.200583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.200927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.200939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.201291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.201302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.201507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.201518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 
00:30:14.262 [2024-07-16 00:06:29.201877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.201887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.202227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.202241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.202434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.202444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.202761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.202772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.203112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.203122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.203486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.203497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.203753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.203763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.204141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.204151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.204550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.204561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.204816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.204827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 
00:30:14.262 [2024-07-16 00:06:29.205147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.205158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.205305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.205315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.205648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.205659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.206017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.206028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.206403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.206414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.206747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.206757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.207136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.207146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.207359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.207370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.207682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.207693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.208068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.208082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 
00:30:14.262 [2024-07-16 00:06:29.208425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.208436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.208789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.208800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.209143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.209153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.209500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.209511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.209741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.209751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.210093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.210103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.210431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.210443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.210757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.210768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.211022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.211032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.211381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.211391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 
00:30:14.262 [2024-07-16 00:06:29.211490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.211500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.211830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.211840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.212208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.212219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.212604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.212615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.212795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.212805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.213008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.213019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.213381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.213392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.213760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.213770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.214187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.214199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.214570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.214580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 
00:30:14.262 [2024-07-16 00:06:29.214931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.214942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.215252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.215264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.215537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.215548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.215926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.215937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.216316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.216327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.216748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.216759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.216983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.216996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.217342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.217353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.217724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.217735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 00:30:14.262 [2024-07-16 00:06:29.218089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.262 [2024-07-16 00:06:29.218100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.262 qpair failed and we were unable to recover it. 
00:30:14.262 [2024-07-16 00:06:29.218358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.262 [2024-07-16 00:06:29.218368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:14.262 qpair failed and we were unable to recover it.
00:30:14.262 [2024-07-16 00:06:29.218730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.262 [2024-07-16 00:06:29.218740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:14.262 qpair failed and we were unable to recover it.
[... the same pair of posix_sock_create connect() failures (errno = 111) and nvme_tcp_qpair_connect_sock errors for tqpair=0x17e1a50 (addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it.", repeats for every reconnect attempt logged between 00:06:29.219 and 00:06:29.293 ...]
00:30:14.266 [2024-07-16 00:06:29.293315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.266 [2024-07-16 00:06:29.293326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:14.266 qpair failed and we were unable to recover it.
00:30:14.266 [2024-07-16 00:06:29.293692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.293702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.294117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.294127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.294523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.294533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.294890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.294906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.295261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.295272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.295626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.295636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.295987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.295997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.296348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.296360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.296610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.296620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.296966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.296976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 
00:30:14.266 [2024-07-16 00:06:29.297327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.297338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.297709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.297719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.298109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.298119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.298391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.298402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.298724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.298734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.299086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.299097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.299450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.299461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.299814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.299825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.300178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.300188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.300552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.300563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 
00:30:14.266 [2024-07-16 00:06:29.300911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.300921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.301283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.301295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.301640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.301651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.302011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.302021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.302306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.302316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.302678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.302688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.303054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.303064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.303448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.303459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.303812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.303823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.304200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.304211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 
00:30:14.266 [2024-07-16 00:06:29.304474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.304487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.304845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.304856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.305195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.305206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.305564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.305575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.305928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.305939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.306197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.306207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.306594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.306604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.306919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.306930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.307290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.307301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.307615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.307627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 
00:30:14.266 [2024-07-16 00:06:29.307977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.307987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.308342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.308353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.308720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.308730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.309128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.309138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.309509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.309521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.309646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.309656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.310017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.310028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.310403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.310414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.310764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.310774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.311194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.311205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 
00:30:14.266 [2024-07-16 00:06:29.311611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.311622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.311975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.311986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.312343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.312354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.312654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.312664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.312916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.312927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.313308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.266 [2024-07-16 00:06:29.313320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.266 qpair failed and we were unable to recover it. 00:30:14.266 [2024-07-16 00:06:29.313680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.313691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.314025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.314037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.314397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.314409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.314769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.314780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 
00:30:14.267 [2024-07-16 00:06:29.315039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.315050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.315407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.315417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.315820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.315830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.316113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.316123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.316500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.316510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.316880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.316891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.317260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.317271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.317648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.317658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.317966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.317977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.318338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.318349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 
00:30:14.267 [2024-07-16 00:06:29.318735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.318746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.319096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.319107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.319456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.319467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.319848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.319858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.320200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.320211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.320514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.320525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.320699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.320709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.320949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.320961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.321211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.321222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.321576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.321587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 
00:30:14.267 [2024-07-16 00:06:29.321939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.321950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.322335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.322347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.322716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.322727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.323161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.323172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.323485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.323495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.323828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.323840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.323991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.324002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.324242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.324254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.324581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.324592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.324933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.324943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 
00:30:14.267 [2024-07-16 00:06:29.325293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.325305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.325664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.325674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.326021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.326032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.326351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.326363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.326595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.326605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.326985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.326995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.327245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.327255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.327633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.327643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.327993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.328006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.328356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.328369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 
00:30:14.267 [2024-07-16 00:06:29.328615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.328625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.329001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.329012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.329359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.329370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.329717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.329728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.330066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.330077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.330433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.330443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.330785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.330795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.331157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.331168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.331486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.331496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.331869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.331880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 
00:30:14.267 [2024-07-16 00:06:29.332234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.332246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.332595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.332605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.332952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.332962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.333291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.333303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.333704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.333714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.334102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.334113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.334515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.334526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.334838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.334848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.267 [2024-07-16 00:06:29.335192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.267 [2024-07-16 00:06:29.335202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.267 qpair failed and we were unable to recover it. 00:30:14.268 [2024-07-16 00:06:29.335512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.268 [2024-07-16 00:06:29.335523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.268 qpair failed and we were unable to recover it. 
00:30:14.268 [2024-07-16 00:06:29.335709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.268 [2024-07-16 00:06:29.335719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.268 qpair failed and we were unable to recover it. 00:30:14.268 [2024-07-16 00:06:29.335965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.268 [2024-07-16 00:06:29.335975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.268 qpair failed and we were unable to recover it. 00:30:14.268 [2024-07-16 00:06:29.336211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.268 [2024-07-16 00:06:29.336220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.268 qpair failed and we were unable to recover it. 00:30:14.268 [2024-07-16 00:06:29.336570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.268 [2024-07-16 00:06:29.336581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.268 qpair failed and we were unable to recover it. 00:30:14.268 [2024-07-16 00:06:29.336939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.268 [2024-07-16 00:06:29.336950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.268 qpair failed and we were unable to recover it. 00:30:14.268 [2024-07-16 00:06:29.337208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.268 [2024-07-16 00:06:29.337220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.268 qpair failed and we were unable to recover it. 00:30:14.268 [2024-07-16 00:06:29.337494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.268 [2024-07-16 00:06:29.337505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.268 qpair failed and we were unable to recover it. 00:30:14.268 [2024-07-16 00:06:29.337808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.268 [2024-07-16 00:06:29.337819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.268 qpair failed and we were unable to recover it. 00:30:14.268 [2024-07-16 00:06:29.338046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.268 [2024-07-16 00:06:29.338057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.268 qpair failed and we were unable to recover it. 00:30:14.268 [2024-07-16 00:06:29.338409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.268 [2024-07-16 00:06:29.338421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.268 qpair failed and we were unable to recover it. 
00:30:14.268 [2024-07-16 00:06:29.338799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.268 [2024-07-16 00:06:29.338809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:14.268 qpair failed and we were unable to recover it.
00:30:14.268 [... same repeating sequence: posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeated continuously from 2024-07-16 00:06:29.338799 through 00:06:29.412777 ...]
00:30:14.271 [2024-07-16 00:06:29.412767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.271 [2024-07-16 00:06:29.412777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:14.271 qpair failed and we were unable to recover it.
00:30:14.271 [2024-07-16 00:06:29.413151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.413161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.413408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.413419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.413710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.413723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.414066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.414076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.414323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.414333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.414608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.414618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.414974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.414984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.415311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.415323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.415544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.415554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.415743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.415752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 
00:30:14.271 [2024-07-16 00:06:29.416098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.416109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.416479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.416491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.416854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.416864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.417202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.417212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.417543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.417554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.417894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.417905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.418284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.418296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.418703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.418713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.419053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.419064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.419355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.419365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 
00:30:14.271 [2024-07-16 00:06:29.419698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.419709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.419892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.419902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.420136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.420147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.420499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.420510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.420842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.420853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.421198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.421209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.421576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.421587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.421935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.421946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.422278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.422288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.422661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.422671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 
00:30:14.271 [2024-07-16 00:06:29.423015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.423027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.423330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.423341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.423575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.423585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.423909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.423919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.424260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.424271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.424505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.424515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.424850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.424861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.425020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.425030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.425258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.425268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.425414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.425424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 
00:30:14.271 [2024-07-16 00:06:29.425678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.425688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.425991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.271 [2024-07-16 00:06:29.426001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.271 qpair failed and we were unable to recover it. 00:30:14.271 [2024-07-16 00:06:29.426341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.272 [2024-07-16 00:06:29.426352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.272 qpair failed and we were unable to recover it. 00:30:14.272 [2024-07-16 00:06:29.426715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.272 [2024-07-16 00:06:29.426725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.272 qpair failed and we were unable to recover it. 00:30:14.272 [2024-07-16 00:06:29.427104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.272 [2024-07-16 00:06:29.427115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.272 qpair failed and we were unable to recover it. 00:30:14.549 [2024-07-16 00:06:29.427472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.549 [2024-07-16 00:06:29.427484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.549 qpair failed and we were unable to recover it. 00:30:14.549 [2024-07-16 00:06:29.427717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.549 [2024-07-16 00:06:29.427729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.549 qpair failed and we were unable to recover it. 00:30:14.549 [2024-07-16 00:06:29.428113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.549 [2024-07-16 00:06:29.428124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.549 qpair failed and we were unable to recover it. 00:30:14.549 [2024-07-16 00:06:29.428446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.549 [2024-07-16 00:06:29.428458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.549 qpair failed and we were unable to recover it. 00:30:14.549 [2024-07-16 00:06:29.428801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.549 [2024-07-16 00:06:29.428811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.549 qpair failed and we were unable to recover it. 
00:30:14.549 [2024-07-16 00:06:29.429155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.549 [2024-07-16 00:06:29.429167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.549 qpair failed and we were unable to recover it. 00:30:14.549 [2024-07-16 00:06:29.429520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.549 [2024-07-16 00:06:29.429532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.549 qpair failed and we were unable to recover it. 00:30:14.549 [2024-07-16 00:06:29.429901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.549 [2024-07-16 00:06:29.429912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.549 qpair failed and we were unable to recover it. 00:30:14.549 [2024-07-16 00:06:29.430262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.549 [2024-07-16 00:06:29.430273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.549 qpair failed and we were unable to recover it. 00:30:14.549 [2024-07-16 00:06:29.430613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.549 [2024-07-16 00:06:29.430624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.549 qpair failed and we were unable to recover it. 00:30:14.549 [2024-07-16 00:06:29.430973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.549 [2024-07-16 00:06:29.430984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.549 qpair failed and we were unable to recover it. 00:30:14.549 [2024-07-16 00:06:29.431315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.549 [2024-07-16 00:06:29.431325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.549 qpair failed and we were unable to recover it. 00:30:14.549 [2024-07-16 00:06:29.431685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.549 [2024-07-16 00:06:29.431696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.549 qpair failed and we were unable to recover it. 00:30:14.549 [2024-07-16 00:06:29.432043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.549 [2024-07-16 00:06:29.432055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.549 qpair failed and we were unable to recover it. 00:30:14.549 [2024-07-16 00:06:29.432420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.549 [2024-07-16 00:06:29.432432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.549 qpair failed and we were unable to recover it. 
00:30:14.549 [2024-07-16 00:06:29.432808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.549 [2024-07-16 00:06:29.432819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.549 qpair failed and we were unable to recover it. 00:30:14.549 [2024-07-16 00:06:29.433170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.549 [2024-07-16 00:06:29.433181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.549 qpair failed and we were unable to recover it. 00:30:14.549 [2024-07-16 00:06:29.433549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.549 [2024-07-16 00:06:29.433560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.549 qpair failed and we were unable to recover it. 00:30:14.549 [2024-07-16 00:06:29.433914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.549 [2024-07-16 00:06:29.433925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.549 qpair failed and we were unable to recover it. 00:30:14.549 [2024-07-16 00:06:29.434297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.549 [2024-07-16 00:06:29.434309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.549 qpair failed and we were unable to recover it. 00:30:14.549 [2024-07-16 00:06:29.434686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.549 [2024-07-16 00:06:29.434696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.549 qpair failed and we were unable to recover it. 00:30:14.549 [2024-07-16 00:06:29.435051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.549 [2024-07-16 00:06:29.435061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.549 qpair failed and we were unable to recover it. 00:30:14.549 [2024-07-16 00:06:29.435413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.435424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 00:30:14.550 [2024-07-16 00:06:29.435683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.435693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 00:30:14.550 [2024-07-16 00:06:29.436052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.436062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 
00:30:14.550 [2024-07-16 00:06:29.436412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.436426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 00:30:14.550 [2024-07-16 00:06:29.436776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.436787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 00:30:14.550 [2024-07-16 00:06:29.437168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.437178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 00:30:14.550 [2024-07-16 00:06:29.437440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.437450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 00:30:14.550 [2024-07-16 00:06:29.437805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.437816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 00:30:14.550 [2024-07-16 00:06:29.438164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.438174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 00:30:14.550 [2024-07-16 00:06:29.438528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.438540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 00:30:14.550 [2024-07-16 00:06:29.438882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.438892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 00:30:14.550 [2024-07-16 00:06:29.439243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.439254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 00:30:14.550 [2024-07-16 00:06:29.439629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.439639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 
00:30:14.550 [2024-07-16 00:06:29.439980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.439990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 00:30:14.550 [2024-07-16 00:06:29.440341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.440352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 00:30:14.550 [2024-07-16 00:06:29.440802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.440813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 00:30:14.550 [2024-07-16 00:06:29.441162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.441172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 00:30:14.550 [2024-07-16 00:06:29.441442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.441453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 00:30:14.550 [2024-07-16 00:06:29.441805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.441815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 00:30:14.550 [2024-07-16 00:06:29.442281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.442292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 00:30:14.550 [2024-07-16 00:06:29.442640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.442651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 00:30:14.550 [2024-07-16 00:06:29.442996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.443006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 00:30:14.550 [2024-07-16 00:06:29.443165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.443175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 
00:30:14.550 [2024-07-16 00:06:29.443380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.443391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 00:30:14.550 [2024-07-16 00:06:29.443771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.443781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 00:30:14.550 [2024-07-16 00:06:29.444044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.550 [2024-07-16 00:06:29.444054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.550 qpair failed and we were unable to recover it. 00:30:14.550 [2024-07-16 00:06:29.444404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.444415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 00:30:14.551 [2024-07-16 00:06:29.444841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.444853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 00:30:14.551 [2024-07-16 00:06:29.445242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.445254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 00:30:14.551 [2024-07-16 00:06:29.445682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.445694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 00:30:14.551 [2024-07-16 00:06:29.446033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.446046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 00:30:14.551 [2024-07-16 00:06:29.446482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.446493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 00:30:14.551 [2024-07-16 00:06:29.446862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.446873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 
00:30:14.551 [2024-07-16 00:06:29.447248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.447259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 00:30:14.551 [2024-07-16 00:06:29.447587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.447597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 00:30:14.551 [2024-07-16 00:06:29.447851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.447861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 00:30:14.551 [2024-07-16 00:06:29.448241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.448251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 00:30:14.551 [2024-07-16 00:06:29.448622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.448633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 00:30:14.551 [2024-07-16 00:06:29.448989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.448999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 00:30:14.551 [2024-07-16 00:06:29.449345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.449356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 00:30:14.551 [2024-07-16 00:06:29.449692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.449703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 00:30:14.551 [2024-07-16 00:06:29.449920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.449931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 00:30:14.551 [2024-07-16 00:06:29.450283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.450293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 
00:30:14.551 [2024-07-16 00:06:29.450647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.450657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 00:30:14.551 [2024-07-16 00:06:29.451015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.451025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 00:30:14.551 [2024-07-16 00:06:29.451236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.451247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 00:30:14.551 [2024-07-16 00:06:29.451603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.451614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 00:30:14.551 [2024-07-16 00:06:29.452033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.452044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 00:30:14.551 [2024-07-16 00:06:29.452287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.452297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 00:30:14.551 [2024-07-16 00:06:29.452613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.452623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 00:30:14.551 [2024-07-16 00:06:29.453016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.453027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.551 qpair failed and we were unable to recover it. 00:30:14.551 [2024-07-16 00:06:29.453226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.551 [2024-07-16 00:06:29.453242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.453629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.453639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 
00:30:14.552 [2024-07-16 00:06:29.453975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.453985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.454392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.454402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.454761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.454773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.455122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.455132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.455533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.455546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.455898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.455908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.456162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.456172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.456374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.456384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.456738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.456749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.457125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.457136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 
00:30:14.552 [2024-07-16 00:06:29.457507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.457518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.457868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.457878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.458248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.458259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.458635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.458646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.458995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.459005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.459343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.459355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.459559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.459570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.459882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.459892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.460238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.460249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.460608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.460619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 
00:30:14.552 [2024-07-16 00:06:29.461008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.461018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.461389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.461400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.461641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.461652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.461983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.461994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.462209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.462219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.462486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.462497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.552 [2024-07-16 00:06:29.462845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.552 [2024-07-16 00:06:29.462855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.552 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.463209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.463220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.463571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.463582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.463932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.463942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 
00:30:14.553 [2024-07-16 00:06:29.464187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.464197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.464436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.464447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.464822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.464833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.465189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.465199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.465494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.465504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.465868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.465879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.466162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.466174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.466520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.466531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.466925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.466937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.467286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.467296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 
00:30:14.553 [2024-07-16 00:06:29.467700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.467710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.467924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.467934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.468301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.468312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.468602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.468612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.468989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.468999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.469348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.469359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.469709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.469720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.470098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.470108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.470517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.470529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.470862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.470872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 
00:30:14.553 [2024-07-16 00:06:29.471038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.471048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.471380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.471391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.471736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.471747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.472194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.472204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.472554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.553 [2024-07-16 00:06:29.472565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.553 qpair failed and we were unable to recover it. 00:30:14.553 [2024-07-16 00:06:29.472833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.472843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 00:30:14.554 [2024-07-16 00:06:29.473217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.473228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 00:30:14.554 [2024-07-16 00:06:29.473573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.473583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 00:30:14.554 [2024-07-16 00:06:29.474016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.474027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 00:30:14.554 [2024-07-16 00:06:29.474264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.474275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 
00:30:14.554 [2024-07-16 00:06:29.474654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.474664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 00:30:14.554 [2024-07-16 00:06:29.475016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.475027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 00:30:14.554 [2024-07-16 00:06:29.475369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.475380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 00:30:14.554 [2024-07-16 00:06:29.475735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.475746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 00:30:14.554 [2024-07-16 00:06:29.476116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.476127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 00:30:14.554 [2024-07-16 00:06:29.476358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.476369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 00:30:14.554 [2024-07-16 00:06:29.476732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.476742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 00:30:14.554 [2024-07-16 00:06:29.476972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.476983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 00:30:14.554 [2024-07-16 00:06:29.477322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.477333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 00:30:14.554 [2024-07-16 00:06:29.477706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.477717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 
00:30:14.554 [2024-07-16 00:06:29.478066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.478077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 00:30:14.554 [2024-07-16 00:06:29.478428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.478439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 00:30:14.554 [2024-07-16 00:06:29.478818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.478833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 00:30:14.554 [2024-07-16 00:06:29.479180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.479191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 00:30:14.554 [2024-07-16 00:06:29.479540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.479551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 00:30:14.554 [2024-07-16 00:06:29.479894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.479904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 00:30:14.554 [2024-07-16 00:06:29.480263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.480274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 00:30:14.554 [2024-07-16 00:06:29.480623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.480633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 00:30:14.554 [2024-07-16 00:06:29.480980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.480990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 00:30:14.554 [2024-07-16 00:06:29.481282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.481292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 
00:30:14.554 [2024-07-16 00:06:29.481653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.481663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 00:30:14.554 [2024-07-16 00:06:29.482010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.554 [2024-07-16 00:06:29.482021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.554 qpair failed and we were unable to recover it. 00:30:14.554 [2024-07-16 00:06:29.482408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.555 [2024-07-16 00:06:29.482420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.555 qpair failed and we were unable to recover it. 00:30:14.555 [2024-07-16 00:06:29.482821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.555 [2024-07-16 00:06:29.482832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.555 qpair failed and we were unable to recover it. 00:30:14.555 [2024-07-16 00:06:29.483164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.555 [2024-07-16 00:06:29.483174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.555 qpair failed and we were unable to recover it. 00:30:14.555 [2024-07-16 00:06:29.483506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.555 [2024-07-16 00:06:29.483517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.555 qpair failed and we were unable to recover it. 00:30:14.555 [2024-07-16 00:06:29.483901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.555 [2024-07-16 00:06:29.483912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.555 qpair failed and we were unable to recover it. 00:30:14.555 [2024-07-16 00:06:29.484264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.555 [2024-07-16 00:06:29.484275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.555 qpair failed and we were unable to recover it. 00:30:14.555 [2024-07-16 00:06:29.484651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.555 [2024-07-16 00:06:29.484661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.555 qpair failed and we were unable to recover it. 00:30:14.555 [2024-07-16 00:06:29.485013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.555 [2024-07-16 00:06:29.485025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.555 qpair failed and we were unable to recover it. 
00:30:14.555 [2024-07-16 00:06:29.485328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.555 [2024-07-16 00:06:29.485338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.555 qpair failed and we were unable to recover it. 00:30:14.555 [2024-07-16 00:06:29.485708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.555 [2024-07-16 00:06:29.485718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.555 qpair failed and we were unable to recover it. 00:30:14.555 [2024-07-16 00:06:29.486087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.555 [2024-07-16 00:06:29.486097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.555 qpair failed and we were unable to recover it. 00:30:14.555 [2024-07-16 00:06:29.486420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.555 [2024-07-16 00:06:29.486431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.555 qpair failed and we were unable to recover it. 00:30:14.555 [2024-07-16 00:06:29.486775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.555 [2024-07-16 00:06:29.486785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.555 qpair failed and we were unable to recover it. 00:30:14.555 [2024-07-16 00:06:29.487133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.555 [2024-07-16 00:06:29.487144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.555 qpair failed and we were unable to recover it. 00:30:14.555 [2024-07-16 00:06:29.487379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.555 [2024-07-16 00:06:29.487391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.555 qpair failed and we were unable to recover it. 00:30:14.555 [2024-07-16 00:06:29.487746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.555 [2024-07-16 00:06:29.487756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.555 qpair failed and we were unable to recover it. 00:30:14.555 [2024-07-16 00:06:29.488105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.555 [2024-07-16 00:06:29.488116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.555 qpair failed and we were unable to recover it. 00:30:14.555 [2024-07-16 00:06:29.488496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.555 [2024-07-16 00:06:29.488509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.555 qpair failed and we were unable to recover it. 
00:30:14.555 [2024-07-16 00:06:29.488887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.555 [2024-07-16 00:06:29.488899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.555 qpair failed and we were unable to recover it. 00:30:14.555 [2024-07-16 00:06:29.489251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.555 [2024-07-16 00:06:29.489262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.555 qpair failed and we were unable to recover it. 00:30:14.555 [2024-07-16 00:06:29.489668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.555 [2024-07-16 00:06:29.489680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.490054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.490065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.490426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.490437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.490696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.490707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.491077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.491089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.491412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.491424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.491709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.491721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.492078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.492089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 
00:30:14.556 [2024-07-16 00:06:29.492450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.492462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.492795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.492806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.493157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.493167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.493442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.493454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.493809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.493820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.494170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.494181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.494577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.494589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.494918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.494929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.495278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.495289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.495597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.495608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 
00:30:14.556 [2024-07-16 00:06:29.495764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.495776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.496115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.496126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.496471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.496483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.496858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.496870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.497148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.497159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.497518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.497530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.497883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.497895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.498263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.498274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.498597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.498608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 00:30:14.556 [2024-07-16 00:06:29.498986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.556 [2024-07-16 00:06:29.498997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.556 qpair failed and we were unable to recover it. 
00:30:14.557 [2024-07-16 00:06:29.499342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.499354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.499707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.499718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.500094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.500105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.500528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.500538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.500879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.500890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.501198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.501210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.501561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.501572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.501932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.501943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.502285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.502297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.502625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.502635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 
00:30:14.557 [2024-07-16 00:06:29.502995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.503006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.503279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.503289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.503691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.503701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.504118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.504129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.504537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.504548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.504904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.504915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.505260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.505270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.505627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.505638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.505892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.505902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.506193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.506203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 
00:30:14.557 [2024-07-16 00:06:29.506568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.506579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.506813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.506825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.507201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.507212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.507604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.507615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.507944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.507955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.508309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.508321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.508584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.508595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.508935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.557 [2024-07-16 00:06:29.508945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.557 qpair failed and we were unable to recover it. 00:30:14.557 [2024-07-16 00:06:29.509283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.509295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 00:30:14.558 [2024-07-16 00:06:29.509651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.509661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 
00:30:14.558 [2024-07-16 00:06:29.509905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.509916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 00:30:14.558 [2024-07-16 00:06:29.510157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.510168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 00:30:14.558 [2024-07-16 00:06:29.510441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.510451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 00:30:14.558 [2024-07-16 00:06:29.510820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.510832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 00:30:14.558 [2024-07-16 00:06:29.511201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.511212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 00:30:14.558 [2024-07-16 00:06:29.511574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.511585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 00:30:14.558 [2024-07-16 00:06:29.511932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.511942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 00:30:14.558 [2024-07-16 00:06:29.512238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.512252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 00:30:14.558 [2024-07-16 00:06:29.512490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.512500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 00:30:14.558 [2024-07-16 00:06:29.512881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.512892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 
00:30:14.558 [2024-07-16 00:06:29.513191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.513202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 00:30:14.558 [2024-07-16 00:06:29.513536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.513548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 00:30:14.558 [2024-07-16 00:06:29.513828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.513839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 00:30:14.558 [2024-07-16 00:06:29.513936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.513947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 00:30:14.558 [2024-07-16 00:06:29.514447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.514477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 00:30:14.558 [2024-07-16 00:06:29.514849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.514858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 00:30:14.558 [2024-07-16 00:06:29.515187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.515196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 00:30:14.558 [2024-07-16 00:06:29.515695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.515723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 00:30:14.558 [2024-07-16 00:06:29.516078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.516087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 00:30:14.558 [2024-07-16 00:06:29.516500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.516529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 
00:30:14.558 [2024-07-16 00:06:29.516905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.516914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 00:30:14.558 [2024-07-16 00:06:29.517145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.517153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 00:30:14.558 [2024-07-16 00:06:29.517520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.517529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 00:30:14.558 [2024-07-16 00:06:29.517881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.558 [2024-07-16 00:06:29.517890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.558 qpair failed and we were unable to recover it. 00:30:14.558 [2024-07-16 00:06:29.518250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.518258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 00:30:14.559 [2024-07-16 00:06:29.518672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.518680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 00:30:14.559 [2024-07-16 00:06:29.519029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.519036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 00:30:14.559 [2024-07-16 00:06:29.519226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.519239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 00:30:14.559 [2024-07-16 00:06:29.519360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.519369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 00:30:14.559 [2024-07-16 00:06:29.519541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.519550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 
00:30:14.559 [2024-07-16 00:06:29.519801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.519809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 00:30:14.559 [2024-07-16 00:06:29.520161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.520170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 00:30:14.559 [2024-07-16 00:06:29.520526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.520535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 00:30:14.559 [2024-07-16 00:06:29.520907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.520914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 00:30:14.559 [2024-07-16 00:06:29.521260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.521270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 00:30:14.559 [2024-07-16 00:06:29.521647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.521655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 00:30:14.559 [2024-07-16 00:06:29.521984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.521994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 00:30:14.559 [2024-07-16 00:06:29.522316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.522324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 00:30:14.559 [2024-07-16 00:06:29.522722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.522730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 00:30:14.559 [2024-07-16 00:06:29.523067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.523075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 
00:30:14.559 [2024-07-16 00:06:29.523437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.523444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 00:30:14.559 [2024-07-16 00:06:29.523685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.523693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 00:30:14.559 [2024-07-16 00:06:29.524069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.524077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 00:30:14.559 [2024-07-16 00:06:29.524345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.524353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 00:30:14.559 [2024-07-16 00:06:29.524714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.524723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 00:30:14.559 [2024-07-16 00:06:29.525070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.525078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 00:30:14.559 [2024-07-16 00:06:29.525352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.525360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 00:30:14.559 [2024-07-16 00:06:29.525717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.525727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 00:30:14.559 [2024-07-16 00:06:29.525920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.525930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 00:30:14.559 [2024-07-16 00:06:29.526152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.559 [2024-07-16 00:06:29.526160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.559 qpair failed and we were unable to recover it. 
00:30:14.559 [2024-07-16 00:06:29.526402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.526410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 00:30:14.560 [2024-07-16 00:06:29.526783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.526792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 00:30:14.560 [2024-07-16 00:06:29.527119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.527128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 00:30:14.560 [2024-07-16 00:06:29.527524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.527533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 00:30:14.560 [2024-07-16 00:06:29.527899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.527907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 00:30:14.560 [2024-07-16 00:06:29.528257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.528266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 00:30:14.560 [2024-07-16 00:06:29.528598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.528606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 00:30:14.560 [2024-07-16 00:06:29.528922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.528931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 00:30:14.560 [2024-07-16 00:06:29.529277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.529285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 00:30:14.560 [2024-07-16 00:06:29.529689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.529696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 
00:30:14.560 [2024-07-16 00:06:29.529995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.530002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 00:30:14.560 [2024-07-16 00:06:29.530348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.530356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 00:30:14.560 [2024-07-16 00:06:29.530723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.530732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 00:30:14.560 [2024-07-16 00:06:29.531081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.531088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 00:30:14.560 [2024-07-16 00:06:29.531352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.531360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 00:30:14.560 [2024-07-16 00:06:29.531706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.531714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 00:30:14.560 [2024-07-16 00:06:29.531964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.531971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 00:30:14.560 [2024-07-16 00:06:29.532311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.532319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 00:30:14.560 [2024-07-16 00:06:29.532666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.532674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 00:30:14.560 [2024-07-16 00:06:29.533021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.533029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 
00:30:14.560 [2024-07-16 00:06:29.533279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.533287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 00:30:14.560 [2024-07-16 00:06:29.533635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.533643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 00:30:14.560 [2024-07-16 00:06:29.533986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.533994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 00:30:14.560 [2024-07-16 00:06:29.534303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.534311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 00:30:14.560 [2024-07-16 00:06:29.534681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.534688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 00:30:14.560 [2024-07-16 00:06:29.535035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.535044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.560 qpair failed and we were unable to recover it. 00:30:14.560 [2024-07-16 00:06:29.535493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.560 [2024-07-16 00:06:29.535501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 00:30:14.561 [2024-07-16 00:06:29.535837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.535845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 00:30:14.561 [2024-07-16 00:06:29.536245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.536252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 00:30:14.561 [2024-07-16 00:06:29.536571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.536579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 
00:30:14.561 [2024-07-16 00:06:29.536927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.536935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 00:30:14.561 [2024-07-16 00:06:29.537280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.537289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 00:30:14.561 [2024-07-16 00:06:29.537694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.537702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 00:30:14.561 [2024-07-16 00:06:29.538043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.538050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 00:30:14.561 [2024-07-16 00:06:29.538361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.538369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 00:30:14.561 [2024-07-16 00:06:29.538719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.538726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 00:30:14.561 [2024-07-16 00:06:29.539069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.539077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 00:30:14.561 [2024-07-16 00:06:29.539423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.539434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 00:30:14.561 [2024-07-16 00:06:29.539782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.539790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 00:30:14.561 [2024-07-16 00:06:29.540135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.540142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 
00:30:14.561 [2024-07-16 00:06:29.540381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.540389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 00:30:14.561 [2024-07-16 00:06:29.540731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.540739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 00:30:14.561 [2024-07-16 00:06:29.541097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.541105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 00:30:14.561 [2024-07-16 00:06:29.541496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.541505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 00:30:14.561 [2024-07-16 00:06:29.541841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.541850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 00:30:14.561 [2024-07-16 00:06:29.542195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.542203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 00:30:14.561 [2024-07-16 00:06:29.542561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.542570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 00:30:14.561 [2024-07-16 00:06:29.542917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.542925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 00:30:14.561 [2024-07-16 00:06:29.543178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.543186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 00:30:14.561 [2024-07-16 00:06:29.543593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.543602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 
00:30:14.561 [2024-07-16 00:06:29.543902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.543911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 00:30:14.561 [2024-07-16 00:06:29.544257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.544266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 00:30:14.561 [2024-07-16 00:06:29.544563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.561 [2024-07-16 00:06:29.544571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.561 qpair failed and we were unable to recover it. 00:30:14.561 [2024-07-16 00:06:29.544915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.562 [2024-07-16 00:06:29.544923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.562 qpair failed and we were unable to recover it. 00:30:14.562 [2024-07-16 00:06:29.545267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.562 [2024-07-16 00:06:29.545275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.562 qpair failed and we were unable to recover it. 00:30:14.562 [2024-07-16 00:06:29.545643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.562 [2024-07-16 00:06:29.545650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.562 qpair failed and we were unable to recover it. 00:30:14.562 [2024-07-16 00:06:29.546017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.562 [2024-07-16 00:06:29.546026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.562 qpair failed and we were unable to recover it. 00:30:14.562 [2024-07-16 00:06:29.546480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.562 [2024-07-16 00:06:29.546488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.562 qpair failed and we were unable to recover it. 00:30:14.562 [2024-07-16 00:06:29.546824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.562 [2024-07-16 00:06:29.546832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.562 qpair failed and we were unable to recover it. 00:30:14.562 [2024-07-16 00:06:29.547176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.562 [2024-07-16 00:06:29.547185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.562 qpair failed and we were unable to recover it. 
00:30:14.562 [2024-07-16 00:06:29.547515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.562 [2024-07-16 00:06:29.547522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.562 qpair failed and we were unable to recover it. 00:30:14.562 [2024-07-16 00:06:29.547871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.562 [2024-07-16 00:06:29.547879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.562 qpair failed and we were unable to recover it. 00:30:14.562 [2024-07-16 00:06:29.548223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.562 [2024-07-16 00:06:29.548234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.562 qpair failed and we were unable to recover it. 00:30:14.562 [2024-07-16 00:06:29.548498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.562 [2024-07-16 00:06:29.548506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.562 qpair failed and we were unable to recover it. 00:30:14.562 [2024-07-16 00:06:29.548699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.562 [2024-07-16 00:06:29.548708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.562 qpair failed and we were unable to recover it. 00:30:14.562 [2024-07-16 00:06:29.549072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.562 [2024-07-16 00:06:29.549080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.562 qpair failed and we were unable to recover it. 00:30:14.562 [2024-07-16 00:06:29.549426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.562 [2024-07-16 00:06:29.549434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.562 qpair failed and we were unable to recover it. 00:30:14.562 [2024-07-16 00:06:29.549787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.562 [2024-07-16 00:06:29.549795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.562 qpair failed and we were unable to recover it. 00:30:14.562 [2024-07-16 00:06:29.550163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.562 [2024-07-16 00:06:29.550171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.562 qpair failed and we were unable to recover it. 00:30:14.562 [2024-07-16 00:06:29.550500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.562 [2024-07-16 00:06:29.550508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.562 qpair failed and we were unable to recover it. 
00:30:14.562 [2024-07-16 00:06:29.550855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.562 [2024-07-16 00:06:29.550863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.562 qpair failed and we were unable to recover it. 00:30:14.562 [2024-07-16 00:06:29.551120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.562 [2024-07-16 00:06:29.551128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.562 qpair failed and we were unable to recover it. 00:30:14.562 [2024-07-16 00:06:29.551507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.562 [2024-07-16 00:06:29.551516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.562 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.551861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.551868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.552232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.552240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.552577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.552586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.552953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.552963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.553307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.553317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.553575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.553582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.553937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.553945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 
00:30:14.563 [2024-07-16 00:06:29.554315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.554323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.554676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.554684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.555027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.555036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.555382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.555390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.555789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.555796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.556149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.556157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.556402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.556410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.556750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.556757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.557128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.557136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.557489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.557497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 
00:30:14.563 [2024-07-16 00:06:29.557847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.557855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.558207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.558215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.558585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.558594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.559017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.559026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.559400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.559408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.559754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.559762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.560131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.560139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.560491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.560498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.560842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.560850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.563 [2024-07-16 00:06:29.561197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.561206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 
00:30:14.563 [2024-07-16 00:06:29.561549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.563 [2024-07-16 00:06:29.561559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.563 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.561910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.561918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.562261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.562270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.562563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.562571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.562937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.562945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.563290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.563299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.563641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.563649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.563995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.564003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.564371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.564379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.564726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.564734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 
00:30:14.564 [2024-07-16 00:06:29.565098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.565107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.565528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.565536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.565860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.565868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.566215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.566222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.566563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.566571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.566621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.566628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.566940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.566948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.567215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.567225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.567620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.567628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.567973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.567982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 
00:30:14.564 [2024-07-16 00:06:29.568338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.568346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.568677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.568685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.569066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.569074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.569419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.569427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.569769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.569778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.570147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.570155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.570500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.570509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.570858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.570866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.571211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.564 [2024-07-16 00:06:29.571218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.564 qpair failed and we were unable to recover it. 00:30:14.564 [2024-07-16 00:06:29.571585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.571593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 
00:30:14.565 [2024-07-16 00:06:29.571985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.571992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 00:30:14.565 [2024-07-16 00:06:29.572341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.572349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 00:30:14.565 [2024-07-16 00:06:29.572717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.572725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 00:30:14.565 [2024-07-16 00:06:29.573084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.573092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 00:30:14.565 [2024-07-16 00:06:29.573432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.573441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 00:30:14.565 [2024-07-16 00:06:29.573791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.573799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 00:30:14.565 [2024-07-16 00:06:29.574149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.574158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 00:30:14.565 [2024-07-16 00:06:29.574499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.574507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 00:30:14.565 [2024-07-16 00:06:29.574844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.574853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 00:30:14.565 [2024-07-16 00:06:29.575206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.575215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 
00:30:14.565 [2024-07-16 00:06:29.575435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.575444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 00:30:14.565 [2024-07-16 00:06:29.575787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.575796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 00:30:14.565 [2024-07-16 00:06:29.576058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.576066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 00:30:14.565 [2024-07-16 00:06:29.576278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.576286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 00:30:14.565 [2024-07-16 00:06:29.576544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.576551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 00:30:14.565 [2024-07-16 00:06:29.576916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.576924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 00:30:14.565 [2024-07-16 00:06:29.577269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.577277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 00:30:14.565 [2024-07-16 00:06:29.577623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.577630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 00:30:14.565 [2024-07-16 00:06:29.578001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.578008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 00:30:14.565 [2024-07-16 00:06:29.578353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.578362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 
00:30:14.565 [2024-07-16 00:06:29.578732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.578741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 00:30:14.565 [2024-07-16 00:06:29.579092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.579100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 00:30:14.565 [2024-07-16 00:06:29.579446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.579454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 00:30:14.565 [2024-07-16 00:06:29.579818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.579826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 00:30:14.565 [2024-07-16 00:06:29.580012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.580019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 00:30:14.565 [2024-07-16 00:06:29.580332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.565 [2024-07-16 00:06:29.580342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.565 qpair failed and we were unable to recover it. 00:30:14.566 [2024-07-16 00:06:29.580700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.580708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 00:30:14.566 [2024-07-16 00:06:29.581060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.581070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 00:30:14.566 [2024-07-16 00:06:29.581417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.581426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 00:30:14.566 [2024-07-16 00:06:29.581780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.581787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 
00:30:14.566 [2024-07-16 00:06:29.582139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.582147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 00:30:14.566 [2024-07-16 00:06:29.582410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.582418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 00:30:14.566 [2024-07-16 00:06:29.582757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.582765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 00:30:14.566 [2024-07-16 00:06:29.583110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.583118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 00:30:14.566 [2024-07-16 00:06:29.583468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.583475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 00:30:14.566 [2024-07-16 00:06:29.583848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.583855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 00:30:14.566 [2024-07-16 00:06:29.584203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.584212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 00:30:14.566 [2024-07-16 00:06:29.584563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.584571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 00:30:14.566 [2024-07-16 00:06:29.584938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.584947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 00:30:14.566 [2024-07-16 00:06:29.585314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.585323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 
00:30:14.566 [2024-07-16 00:06:29.585669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.585676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 00:30:14.566 [2024-07-16 00:06:29.586023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.586031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 00:30:14.566 [2024-07-16 00:06:29.586376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.586385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 00:30:14.566 [2024-07-16 00:06:29.586753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.586761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 00:30:14.566 [2024-07-16 00:06:29.587132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.587140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 00:30:14.566 [2024-07-16 00:06:29.587491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.587500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 00:30:14.566 [2024-07-16 00:06:29.587919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.587927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 00:30:14.566 [2024-07-16 00:06:29.588252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.588261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 00:30:14.566 [2024-07-16 00:06:29.588594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.588602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 00:30:14.566 [2024-07-16 00:06:29.588949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.588957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 
00:30:14.566 [2024-07-16 00:06:29.589303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.589312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 00:30:14.566 [2024-07-16 00:06:29.589653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.566 [2024-07-16 00:06:29.589661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.566 qpair failed and we were unable to recover it. 00:30:14.566 [2024-07-16 00:06:29.590010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.590017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.567 [2024-07-16 00:06:29.590362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.590370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.567 [2024-07-16 00:06:29.590727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.590735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.567 [2024-07-16 00:06:29.591101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.591110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.567 [2024-07-16 00:06:29.591337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.591345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.567 [2024-07-16 00:06:29.591683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.591691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.567 [2024-07-16 00:06:29.591903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.591911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.567 [2024-07-16 00:06:29.592279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.592287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 
00:30:14.567 [2024-07-16 00:06:29.592513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.592521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.567 [2024-07-16 00:06:29.592760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.592769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.567 [2024-07-16 00:06:29.593107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.593115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.567 [2024-07-16 00:06:29.593460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.593468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.567 [2024-07-16 00:06:29.593812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.593821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.567 [2024-07-16 00:06:29.594215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.594222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.567 [2024-07-16 00:06:29.594562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.594571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.567 [2024-07-16 00:06:29.594898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.594907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.567 [2024-07-16 00:06:29.595251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.595259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.567 [2024-07-16 00:06:29.595608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.595615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 
00:30:14.567 [2024-07-16 00:06:29.595959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.595967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.567 [2024-07-16 00:06:29.596335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.596343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.567 [2024-07-16 00:06:29.596747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.596754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.567 [2024-07-16 00:06:29.597104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.597112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.567 [2024-07-16 00:06:29.597541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.597550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.567 [2024-07-16 00:06:29.597887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.597896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.567 [2024-07-16 00:06:29.598241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.598249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.567 [2024-07-16 00:06:29.598601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.598609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.567 [2024-07-16 00:06:29.598956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.567 [2024-07-16 00:06:29.598965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.567 qpair failed and we were unable to recover it. 00:30:14.568 [2024-07-16 00:06:29.599331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.568 [2024-07-16 00:06:29.599339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.568 qpair failed and we were unable to recover it. 
00:30:14.568 [2024-07-16 00:06:29.599728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.568 [2024-07-16 00:06:29.599737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.568 qpair failed and we were unable to recover it. 00:30:14.568 [2024-07-16 00:06:29.600085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.568 [2024-07-16 00:06:29.600094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.568 qpair failed and we were unable to recover it. 00:30:14.568 [2024-07-16 00:06:29.600445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.568 [2024-07-16 00:06:29.600453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.568 qpair failed and we were unable to recover it. 00:30:14.568 [2024-07-16 00:06:29.600772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.568 [2024-07-16 00:06:29.600780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.568 qpair failed and we were unable to recover it. 00:30:14.568 [2024-07-16 00:06:29.600973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.568 [2024-07-16 00:06:29.600982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.568 qpair failed and we were unable to recover it. 00:30:14.568 [2024-07-16 00:06:29.601286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.568 [2024-07-16 00:06:29.601294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.568 qpair failed and we were unable to recover it. 00:30:14.568 [2024-07-16 00:06:29.601613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.568 [2024-07-16 00:06:29.601620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.568 qpair failed and we were unable to recover it. 00:30:14.568 [2024-07-16 00:06:29.601986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.568 [2024-07-16 00:06:29.601993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.568 qpair failed and we were unable to recover it. 00:30:14.568 [2024-07-16 00:06:29.602337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.568 [2024-07-16 00:06:29.602346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.568 qpair failed and we were unable to recover it. 00:30:14.568 [2024-07-16 00:06:29.602701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.568 [2024-07-16 00:06:29.602709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.568 qpair failed and we were unable to recover it. 
00:30:14.568 [2024-07-16 00:06:29.603095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.568 [2024-07-16 00:06:29.603103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.568 qpair failed and we were unable to recover it. 00:30:14.568 [2024-07-16 00:06:29.603463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.568 [2024-07-16 00:06:29.603472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.568 qpair failed and we were unable to recover it. 00:30:14.568 [2024-07-16 00:06:29.603841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.568 [2024-07-16 00:06:29.603848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.568 qpair failed and we were unable to recover it. 00:30:14.568 [2024-07-16 00:06:29.604192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.568 [2024-07-16 00:06:29.604201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.568 qpair failed and we were unable to recover it. 00:30:14.568 [2024-07-16 00:06:29.604602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.568 [2024-07-16 00:06:29.604611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.568 qpair failed and we were unable to recover it. 00:30:14.568 [2024-07-16 00:06:29.604977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.568 [2024-07-16 00:06:29.604985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.568 qpair failed and we were unable to recover it. 00:30:14.568 [2024-07-16 00:06:29.605331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.568 [2024-07-16 00:06:29.605339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.568 qpair failed and we were unable to recover it. 00:30:14.568 [2024-07-16 00:06:29.605691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.568 [2024-07-16 00:06:29.605700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.568 qpair failed and we were unable to recover it. 00:30:14.568 [2024-07-16 00:06:29.606052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.568 [2024-07-16 00:06:29.606060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.568 qpair failed and we were unable to recover it. 00:30:14.568 [2024-07-16 00:06:29.606433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.568 [2024-07-16 00:06:29.606442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.568 qpair failed and we were unable to recover it. 
00:30:14.568 [2024-07-16 00:06:29.606652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.606660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.569 [2024-07-16 00:06:29.606975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.606983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.569 [2024-07-16 00:06:29.607333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.607341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.569 [2024-07-16 00:06:29.607675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.607683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.569 [2024-07-16 00:06:29.608029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.608036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.569 [2024-07-16 00:06:29.608380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.608388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.569 [2024-07-16 00:06:29.608732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.608739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.569 [2024-07-16 00:06:29.609114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.609121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.569 [2024-07-16 00:06:29.609491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.609499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.569 [2024-07-16 00:06:29.609844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.609852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 
00:30:14.569 [2024-07-16 00:06:29.610197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.610205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.569 [2024-07-16 00:06:29.610602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.610610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.569 [2024-07-16 00:06:29.610940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.610948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.569 [2024-07-16 00:06:29.611293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.611301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.569 [2024-07-16 00:06:29.611665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.611673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.569 [2024-07-16 00:06:29.612038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.612046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.569 [2024-07-16 00:06:29.612391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.612399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.569 [2024-07-16 00:06:29.612749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.612757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.569 [2024-07-16 00:06:29.613180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.613187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.569 [2024-07-16 00:06:29.613525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.613533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 
00:30:14.569 [2024-07-16 00:06:29.613878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.613886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.569 [2024-07-16 00:06:29.614233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.614241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.569 [2024-07-16 00:06:29.614592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.614600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.569 [2024-07-16 00:06:29.614972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.614980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.569 [2024-07-16 00:06:29.615326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.615334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.569 [2024-07-16 00:06:29.615664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.615672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.569 [2024-07-16 00:06:29.616087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.569 [2024-07-16 00:06:29.616095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.569 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.616424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.616432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.616777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.616784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.617207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.617214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 
00:30:14.570 [2024-07-16 00:06:29.617553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.617561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.617933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.617941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.618287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.618294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.618607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.618615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.618960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.618970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.619341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.619349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.619693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.619702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.620005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.620012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.620369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.620377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.620738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.620747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 
00:30:14.570 [2024-07-16 00:06:29.621098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.621106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.621309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.621317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.621741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.621749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.622108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.622115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.622481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.622489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.622845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.622852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.623194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.623202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.623573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.623581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.623890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.623897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.624243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.624251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 
00:30:14.570 [2024-07-16 00:06:29.624594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.624602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.624967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.624975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.625323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.625330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.625678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.625687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.626035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.570 [2024-07-16 00:06:29.626042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.570 qpair failed and we were unable to recover it. 00:30:14.570 [2024-07-16 00:06:29.626301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.626309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 00:30:14.571 [2024-07-16 00:06:29.626549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.626556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 00:30:14.571 [2024-07-16 00:06:29.626896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.626905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 00:30:14.571 [2024-07-16 00:06:29.627251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.627260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 00:30:14.571 [2024-07-16 00:06:29.627604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.627612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 
00:30:14.571 [2024-07-16 00:06:29.627947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.627955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 00:30:14.571 [2024-07-16 00:06:29.628207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.628215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 00:30:14.571 [2024-07-16 00:06:29.628553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.628561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 00:30:14.571 [2024-07-16 00:06:29.628892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.628900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 00:30:14.571 [2024-07-16 00:06:29.629091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.629100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 00:30:14.571 [2024-07-16 00:06:29.629433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.629441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 00:30:14.571 [2024-07-16 00:06:29.629683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.629690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 00:30:14.571 [2024-07-16 00:06:29.630060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.630068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 00:30:14.571 [2024-07-16 00:06:29.630413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.630421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 00:30:14.571 [2024-07-16 00:06:29.630773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.630782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 
00:30:14.571 [2024-07-16 00:06:29.631130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.631138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 00:30:14.571 [2024-07-16 00:06:29.631326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.631334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 00:30:14.571 [2024-07-16 00:06:29.631646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.631655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 00:30:14.571 [2024-07-16 00:06:29.631997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.632004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 00:30:14.571 [2024-07-16 00:06:29.632354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.632365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 00:30:14.571 [2024-07-16 00:06:29.632745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.632754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 00:30:14.571 [2024-07-16 00:06:29.633106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.633114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 00:30:14.571 [2024-07-16 00:06:29.633446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.633454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 00:30:14.571 [2024-07-16 00:06:29.633816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.633824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 00:30:14.571 [2024-07-16 00:06:29.634186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.571 [2024-07-16 00:06:29.634195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.571 qpair failed and we were unable to recover it. 
00:30:14.572 [2024-07-16 00:06:29.634530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.634538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 00:30:14.572 [2024-07-16 00:06:29.634883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.634892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 00:30:14.572 [2024-07-16 00:06:29.635238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.635246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 00:30:14.572 [2024-07-16 00:06:29.635698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.635706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 00:30:14.572 [2024-07-16 00:06:29.636051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.636059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 00:30:14.572 [2024-07-16 00:06:29.636440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.636448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 00:30:14.572 [2024-07-16 00:06:29.636794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.636803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 00:30:14.572 [2024-07-16 00:06:29.637169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.637177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 00:30:14.572 [2024-07-16 00:06:29.637527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.637535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 00:30:14.572 [2024-07-16 00:06:29.637878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.637886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 
00:30:14.572 [2024-07-16 00:06:29.638236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.638243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 00:30:14.572 [2024-07-16 00:06:29.638592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.638599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 00:30:14.572 [2024-07-16 00:06:29.639004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.639012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 00:30:14.572 [2024-07-16 00:06:29.639446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.639475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 00:30:14.572 [2024-07-16 00:06:29.639727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.639737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 00:30:14.572 [2024-07-16 00:06:29.640058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.640066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 00:30:14.572 [2024-07-16 00:06:29.640422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.640431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 00:30:14.572 [2024-07-16 00:06:29.640779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.640788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 00:30:14.572 [2024-07-16 00:06:29.641139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.641147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 00:30:14.572 [2024-07-16 00:06:29.641493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.641502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 
00:30:14.572 [2024-07-16 00:06:29.641853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.641860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 00:30:14.572 [2024-07-16 00:06:29.642122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.642129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 00:30:14.572 [2024-07-16 00:06:29.642502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.642510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 00:30:14.572 [2024-07-16 00:06:29.642878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.642887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 00:30:14.572 [2024-07-16 00:06:29.643143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.643152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 00:30:14.572 [2024-07-16 00:06:29.643497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.572 [2024-07-16 00:06:29.643506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.572 qpair failed and we were unable to recover it. 00:30:14.572 [2024-07-16 00:06:29.643850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.643859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 00:30:14.573 [2024-07-16 00:06:29.644223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.644239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 00:30:14.573 [2024-07-16 00:06:29.644603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.644611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 00:30:14.573 [2024-07-16 00:06:29.644957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.644965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 
00:30:14.573 [2024-07-16 00:06:29.645314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.645323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 00:30:14.573 [2024-07-16 00:06:29.645704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.645713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 00:30:14.573 [2024-07-16 00:06:29.646055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.646063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 00:30:14.573 [2024-07-16 00:06:29.646409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.646418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 00:30:14.573 [2024-07-16 00:06:29.646767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.646778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 00:30:14.573 [2024-07-16 00:06:29.647143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.647151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 00:30:14.573 [2024-07-16 00:06:29.647504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.647513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 00:30:14.573 [2024-07-16 00:06:29.647860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.647867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 00:30:14.573 [2024-07-16 00:06:29.648213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.648221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 00:30:14.573 [2024-07-16 00:06:29.648564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.648572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 
00:30:14.573 [2024-07-16 00:06:29.648762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.648770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 00:30:14.573 [2024-07-16 00:06:29.649198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.649206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 00:30:14.573 [2024-07-16 00:06:29.649542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.649550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 00:30:14.573 [2024-07-16 00:06:29.649920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.649928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 00:30:14.573 [2024-07-16 00:06:29.650269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.650278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 00:30:14.573 [2024-07-16 00:06:29.650634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.650642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 00:30:14.573 [2024-07-16 00:06:29.650991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.650999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 00:30:14.573 [2024-07-16 00:06:29.651366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.651373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 00:30:14.573 [2024-07-16 00:06:29.651607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.651615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 00:30:14.573 [2024-07-16 00:06:29.651959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.651967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 
00:30:14.573 [2024-07-16 00:06:29.652155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.652163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 00:30:14.573 [2024-07-16 00:06:29.652507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.652515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 00:30:14.573 [2024-07-16 00:06:29.652861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.573 [2024-07-16 00:06:29.652868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.573 qpair failed and we were unable to recover it. 00:30:14.573 [2024-07-16 00:06:29.653213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.653221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.653558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.653567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.653932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.653939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.654125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.654133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.654452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.654460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.654651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.654658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.654986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.654995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 
00:30:14.574 [2024-07-16 00:06:29.655412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.655420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.655632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.655640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.655983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.655992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.656362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.656371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.656718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.656726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.657070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.657078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.657422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.657430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.657832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.657839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.658177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.658186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.658571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.658580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 
00:30:14.574 [2024-07-16 00:06:29.658925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.658933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.659184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.659192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.659534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.659542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.659888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.659896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.660245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.660255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.660603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.660611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.660955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.660963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.661313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.661321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.661676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.661684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.662056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.662065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 
00:30:14.574 [2024-07-16 00:06:29.662492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.662500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.574 qpair failed and we were unable to recover it. 00:30:14.574 [2024-07-16 00:06:29.662871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.574 [2024-07-16 00:06:29.662880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.575 qpair failed and we were unable to recover it. 00:30:14.575 [2024-07-16 00:06:29.663233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.575 [2024-07-16 00:06:29.663242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.575 qpair failed and we were unable to recover it. 00:30:14.575 [2024-07-16 00:06:29.663594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.575 [2024-07-16 00:06:29.663603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.575 qpair failed and we were unable to recover it. 00:30:14.575 [2024-07-16 00:06:29.663947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.575 [2024-07-16 00:06:29.663955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.575 qpair failed and we were unable to recover it. 00:30:14.575 [2024-07-16 00:06:29.664301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.575 [2024-07-16 00:06:29.664310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.575 qpair failed and we were unable to recover it. 00:30:14.575 [2024-07-16 00:06:29.664656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.575 [2024-07-16 00:06:29.664664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.575 qpair failed and we were unable to recover it. 00:30:14.575 [2024-07-16 00:06:29.664995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.575 [2024-07-16 00:06:29.665003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.575 qpair failed and we were unable to recover it. 00:30:14.575 [2024-07-16 00:06:29.665346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.575 [2024-07-16 00:06:29.665355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.575 qpair failed and we were unable to recover it. 00:30:14.575 [2024-07-16 00:06:29.665700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.575 [2024-07-16 00:06:29.665707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.575 qpair failed and we were unable to recover it. 
00:30:14.575 [2024-07-16 00:06:29.666058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.575 [2024-07-16 00:06:29.666067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.575 qpair failed and we were unable to recover it. 00:30:14.575 [2024-07-16 00:06:29.666672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.575 [2024-07-16 00:06:29.666690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.575 qpair failed and we were unable to recover it. 00:30:14.575 [2024-07-16 00:06:29.667061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.575 [2024-07-16 00:06:29.667070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.575 qpair failed and we were unable to recover it. 00:30:14.575 [2024-07-16 00:06:29.667419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.575 [2024-07-16 00:06:29.667427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.575 qpair failed and we were unable to recover it. 00:30:14.575 [2024-07-16 00:06:29.667776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.575 [2024-07-16 00:06:29.667783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.575 qpair failed and we were unable to recover it. 00:30:14.575 [2024-07-16 00:06:29.668079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.575 [2024-07-16 00:06:29.668088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.575 qpair failed and we were unable to recover it. 00:30:14.575 [2024-07-16 00:06:29.668286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.575 [2024-07-16 00:06:29.668294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.575 qpair failed and we were unable to recover it. 00:30:14.575 [2024-07-16 00:06:29.668533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.575 [2024-07-16 00:06:29.668541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.575 qpair failed and we were unable to recover it. 00:30:14.575 [2024-07-16 00:06:29.668907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.575 [2024-07-16 00:06:29.668916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.575 qpair failed and we were unable to recover it. 00:30:14.575 [2024-07-16 00:06:29.669278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.575 [2024-07-16 00:06:29.669286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.575 qpair failed and we were unable to recover it. 
00:30:14.575 [2024-07-16 00:06:29.669647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.575 [2024-07-16 00:06:29.669654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.575 qpair failed and we were unable to recover it. 00:30:14.575 [2024-07-16 00:06:29.670002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.670010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 00:30:14.576 [2024-07-16 00:06:29.670220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.670231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 00:30:14.576 [2024-07-16 00:06:29.670581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.670589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 00:30:14.576 [2024-07-16 00:06:29.670942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.670950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 00:30:14.576 [2024-07-16 00:06:29.671203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.671211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 00:30:14.576 [2024-07-16 00:06:29.671566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.671574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 00:30:14.576 [2024-07-16 00:06:29.671945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.671953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 00:30:14.576 [2024-07-16 00:06:29.672302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.672310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 00:30:14.576 [2024-07-16 00:06:29.672678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.672685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 
00:30:14.576 [2024-07-16 00:06:29.672898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.672906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 00:30:14.576 [2024-07-16 00:06:29.673278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.673285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 00:30:14.576 [2024-07-16 00:06:29.673479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.673487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 00:30:14.576 [2024-07-16 00:06:29.673792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.673800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 00:30:14.576 [2024-07-16 00:06:29.674143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.674153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 00:30:14.576 [2024-07-16 00:06:29.674502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.674511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 00:30:14.576 [2024-07-16 00:06:29.674817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.674825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 00:30:14.576 [2024-07-16 00:06:29.675168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.675177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 00:30:14.576 [2024-07-16 00:06:29.675510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.675518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 00:30:14.576 [2024-07-16 00:06:29.675891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.675898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 
00:30:14.576 [2024-07-16 00:06:29.676331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.676339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 00:30:14.576 [2024-07-16 00:06:29.676739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.676747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 00:30:14.576 [2024-07-16 00:06:29.677192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.677201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 00:30:14.576 [2024-07-16 00:06:29.677432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.677440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 00:30:14.576 [2024-07-16 00:06:29.677783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.677791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 00:30:14.576 [2024-07-16 00:06:29.678080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.678088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.576 qpair failed and we were unable to recover it. 00:30:14.576 [2024-07-16 00:06:29.678441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.576 [2024-07-16 00:06:29.678449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.678801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.678810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.679162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.679170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.679507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.679515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 
00:30:14.577 [2024-07-16 00:06:29.679868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.679876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.680222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.680232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.680537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.680545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.680883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.680891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.681305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.681313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.681663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.681671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.682038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.682046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.682397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.682404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.682763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.682770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.683137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.683145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 
00:30:14.577 [2024-07-16 00:06:29.683395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.683403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.683777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.683785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.684132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.684140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.684462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.684470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.684803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.684811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.685162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.685170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.685537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.685546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.685962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.685970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.686306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.686314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.686669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.686677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 
00:30:14.577 [2024-07-16 00:06:29.686894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.686902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.687271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.687279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.687630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.687638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.687977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.687984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.577 [2024-07-16 00:06:29.688339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.577 [2024-07-16 00:06:29.688349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.577 qpair failed and we were unable to recover it. 00:30:14.578 [2024-07-16 00:06:29.688722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.688730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 00:30:14.578 [2024-07-16 00:06:29.689074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.689082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 00:30:14.578 [2024-07-16 00:06:29.689448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.689456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 00:30:14.578 [2024-07-16 00:06:29.689507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.689515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 00:30:14.578 [2024-07-16 00:06:29.689886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.689894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 
00:30:14.578 [2024-07-16 00:06:29.690225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.690235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 00:30:14.578 [2024-07-16 00:06:29.690642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.690649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 00:30:14.578 [2024-07-16 00:06:29.690990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.690999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 00:30:14.578 [2024-07-16 00:06:29.691348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.691356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 00:30:14.578 [2024-07-16 00:06:29.691681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.691689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 00:30:14.578 [2024-07-16 00:06:29.692069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.692077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 00:30:14.578 [2024-07-16 00:06:29.692427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.692436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 00:30:14.578 [2024-07-16 00:06:29.692626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.692636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 00:30:14.578 [2024-07-16 00:06:29.692978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.692986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 00:30:14.578 [2024-07-16 00:06:29.693331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.693340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 
00:30:14.578 [2024-07-16 00:06:29.693768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.693775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 00:30:14.578 [2024-07-16 00:06:29.694156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.694162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 00:30:14.578 [2024-07-16 00:06:29.694314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.694324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 00:30:14.578 [2024-07-16 00:06:29.694660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.694669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 00:30:14.578 [2024-07-16 00:06:29.695065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.695073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 00:30:14.578 [2024-07-16 00:06:29.695414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.695421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 00:30:14.578 [2024-07-16 00:06:29.695644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.695652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 00:30:14.578 [2024-07-16 00:06:29.695990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.695997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 00:30:14.578 [2024-07-16 00:06:29.696190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.696197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 00:30:14.578 [2024-07-16 00:06:29.696566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.696574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 
00:30:14.578 [2024-07-16 00:06:29.696979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.578 [2024-07-16 00:06:29.696987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.578 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.697338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.697346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.697692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.697700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.697971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.697978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.698289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.698297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.698576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.698584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.698915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.698923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.699290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.699297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.699551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.699559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.699902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.699910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 
00:30:14.579 [2024-07-16 00:06:29.700098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.700106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.700532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.700540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.700923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.700930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.701307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.701316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.701665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.701675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.702032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.702039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.702233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.702241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.702522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.702531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.702748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.702756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.703171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.703180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 
00:30:14.579 [2024-07-16 00:06:29.703601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.703609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.703815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.703823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.704026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.704034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.704298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.704306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.704570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.704577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.704751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.704758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.705006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.705015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.705245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.579 [2024-07-16 00:06:29.705253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.579 qpair failed and we were unable to recover it. 00:30:14.579 [2024-07-16 00:06:29.705427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.705434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 00:30:14.580 [2024-07-16 00:06:29.705695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.705702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 
00:30:14.580 [2024-07-16 00:06:29.705934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.705942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 00:30:14.580 [2024-07-16 00:06:29.706156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.706163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 00:30:14.580 [2024-07-16 00:06:29.706522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.706530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 00:30:14.580 [2024-07-16 00:06:29.706777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.706784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 00:30:14.580 [2024-07-16 00:06:29.707128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.707136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 00:30:14.580 [2024-07-16 00:06:29.707476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.707484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 00:30:14.580 [2024-07-16 00:06:29.707855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.707863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 00:30:14.580 [2024-07-16 00:06:29.708202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.708211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 00:30:14.580 [2024-07-16 00:06:29.708410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.708418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 00:30:14.580 [2024-07-16 00:06:29.708778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.708786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 
00:30:14.580 [2024-07-16 00:06:29.709113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.709121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 00:30:14.580 [2024-07-16 00:06:29.709473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.709482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 00:30:14.580 [2024-07-16 00:06:29.709847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.709856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 00:30:14.580 [2024-07-16 00:06:29.710212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.710219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 00:30:14.580 [2024-07-16 00:06:29.710485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.710492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 00:30:14.580 [2024-07-16 00:06:29.710839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.710846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 00:30:14.580 [2024-07-16 00:06:29.711193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.711201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 00:30:14.580 [2024-07-16 00:06:29.711555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.711564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 00:30:14.580 [2024-07-16 00:06:29.711926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.711934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 00:30:14.580 [2024-07-16 00:06:29.712334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.712342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 
00:30:14.580 [2024-07-16 00:06:29.712686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.712694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 00:30:14.580 [2024-07-16 00:06:29.713050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.713058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 00:30:14.580 [2024-07-16 00:06:29.713398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.713405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 00:30:14.580 [2024-07-16 00:06:29.713709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.713716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 00:30:14.580 [2024-07-16 00:06:29.714146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.580 [2024-07-16 00:06:29.714154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.580 qpair failed and we were unable to recover it. 00:30:14.580 [2024-07-16 00:06:29.714475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.581 [2024-07-16 00:06:29.714483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.581 qpair failed and we were unable to recover it. 00:30:14.581 [2024-07-16 00:06:29.714853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.581 [2024-07-16 00:06:29.714862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.581 qpair failed and we were unable to recover it. 00:30:14.581 [2024-07-16 00:06:29.715210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.581 [2024-07-16 00:06:29.715218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.581 qpair failed and we were unable to recover it. 00:30:14.581 [2024-07-16 00:06:29.715569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.581 [2024-07-16 00:06:29.715577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.581 qpair failed and we were unable to recover it. 00:30:14.581 [2024-07-16 00:06:29.715923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.581 [2024-07-16 00:06:29.715931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.581 qpair failed and we were unable to recover it. 
00:30:14.581 [2024-07-16 00:06:29.716352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.581 [2024-07-16 00:06:29.716360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.581 qpair failed and we were unable to recover it. 00:30:14.581 [2024-07-16 00:06:29.716635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.581 [2024-07-16 00:06:29.716643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.581 qpair failed and we were unable to recover it. 00:30:14.581 [2024-07-16 00:06:29.716892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.581 [2024-07-16 00:06:29.716900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.581 qpair failed and we were unable to recover it. 00:30:14.581 [2024-07-16 00:06:29.717248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.581 [2024-07-16 00:06:29.717256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.581 qpair failed and we were unable to recover it. 00:30:14.581 [2024-07-16 00:06:29.717661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.581 [2024-07-16 00:06:29.717669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.581 qpair failed and we were unable to recover it. 00:30:14.581 [2024-07-16 00:06:29.718054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.581 [2024-07-16 00:06:29.718062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.581 qpair failed and we were unable to recover it. 00:30:14.581 [2024-07-16 00:06:29.718418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.581 [2024-07-16 00:06:29.718427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.581 qpair failed and we were unable to recover it. 00:30:14.581 [2024-07-16 00:06:29.718771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.581 [2024-07-16 00:06:29.718780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.581 qpair failed and we were unable to recover it. 00:30:14.581 [2024-07-16 00:06:29.719105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.581 [2024-07-16 00:06:29.719114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.581 qpair failed and we were unable to recover it. 00:30:14.581 [2024-07-16 00:06:29.719480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.581 [2024-07-16 00:06:29.719488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.581 qpair failed and we were unable to recover it. 
00:30:14.581 [2024-07-16 00:06:29.719835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.581 [2024-07-16 00:06:29.719844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.581 qpair failed and we were unable to recover it. 00:30:14.581 [2024-07-16 00:06:29.720108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.581 [2024-07-16 00:06:29.720116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.581 qpair failed and we were unable to recover it. 00:30:14.581 [2024-07-16 00:06:29.720524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.581 [2024-07-16 00:06:29.720532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.581 qpair failed and we were unable to recover it. 00:30:14.581 [2024-07-16 00:06:29.720885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.581 [2024-07-16 00:06:29.720893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.581 qpair failed and we were unable to recover it. 00:30:14.581 [2024-07-16 00:06:29.721242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.581 [2024-07-16 00:06:29.721250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.581 qpair failed and we were unable to recover it. 00:30:14.581 [2024-07-16 00:06:29.721535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.581 [2024-07-16 00:06:29.721543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.581 qpair failed and we were unable to recover it. 00:30:14.581 [2024-07-16 00:06:29.721920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.581 [2024-07-16 00:06:29.721927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.581 qpair failed and we were unable to recover it. 00:30:14.582 [2024-07-16 00:06:29.722275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.582 [2024-07-16 00:06:29.722284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.582 qpair failed and we were unable to recover it. 00:30:14.582 [2024-07-16 00:06:29.722657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.582 [2024-07-16 00:06:29.722665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.582 qpair failed and we were unable to recover it. 00:30:14.582 [2024-07-16 00:06:29.723016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.582 [2024-07-16 00:06:29.723024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.582 qpair failed and we were unable to recover it. 
00:30:14.582 [2024-07-16 00:06:29.723212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.582 [2024-07-16 00:06:29.723219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.582 qpair failed and we were unable to recover it. 00:30:14.582 [2024-07-16 00:06:29.723567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.582 [2024-07-16 00:06:29.723575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.582 qpair failed and we were unable to recover it. 00:30:14.582 [2024-07-16 00:06:29.723943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.582 [2024-07-16 00:06:29.723950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.582 qpair failed and we were unable to recover it. 00:30:14.582 [2024-07-16 00:06:29.724314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.582 [2024-07-16 00:06:29.724322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.582 qpair failed and we were unable to recover it. 00:30:14.582 [2024-07-16 00:06:29.724709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.582 [2024-07-16 00:06:29.724716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.582 qpair failed and we were unable to recover it. 00:30:14.582 [2024-07-16 00:06:29.725074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.582 [2024-07-16 00:06:29.725083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.582 qpair failed and we were unable to recover it. 00:30:14.582 [2024-07-16 00:06:29.725461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.582 [2024-07-16 00:06:29.725469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.582 qpair failed and we were unable to recover it. 00:30:14.856 [2024-07-16 00:06:29.725818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.856 [2024-07-16 00:06:29.725827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.856 qpair failed and we were unable to recover it. 00:30:14.856 [2024-07-16 00:06:29.726150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.856 [2024-07-16 00:06:29.726159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.856 qpair failed and we were unable to recover it. 00:30:14.856 [2024-07-16 00:06:29.726505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.856 [2024-07-16 00:06:29.726513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.856 qpair failed and we were unable to recover it. 
00:30:14.856 [2024-07-16 00:06:29.726867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.856 [2024-07-16 00:06:29.726875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.856 qpair failed and we were unable to recover it. 00:30:14.856 [2024-07-16 00:06:29.727225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.856 [2024-07-16 00:06:29.727237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.856 qpair failed and we were unable to recover it. 00:30:14.856 [2024-07-16 00:06:29.727601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.856 [2024-07-16 00:06:29.727608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.856 qpair failed and we were unable to recover it. 00:30:14.856 [2024-07-16 00:06:29.727820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.856 [2024-07-16 00:06:29.727828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.856 qpair failed and we were unable to recover it. 00:30:14.856 [2024-07-16 00:06:29.728237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.856 [2024-07-16 00:06:29.728246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.856 qpair failed and we were unable to recover it. 00:30:14.856 [2024-07-16 00:06:29.728576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.856 [2024-07-16 00:06:29.728586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.856 qpair failed and we were unable to recover it. 00:30:14.856 [2024-07-16 00:06:29.728841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.856 [2024-07-16 00:06:29.728849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.856 qpair failed and we were unable to recover it. 00:30:14.856 [2024-07-16 00:06:29.729190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.856 [2024-07-16 00:06:29.729197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.856 qpair failed and we were unable to recover it. 00:30:14.856 [2024-07-16 00:06:29.729581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.856 [2024-07-16 00:06:29.729589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.856 qpair failed and we were unable to recover it. 00:30:14.856 [2024-07-16 00:06:29.729940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.856 [2024-07-16 00:06:29.729947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.856 qpair failed and we were unable to recover it. 
00:30:14.856 [2024-07-16 00:06:29.730163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.856 [2024-07-16 00:06:29.730171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.856 qpair failed and we were unable to recover it. 00:30:14.856 [2024-07-16 00:06:29.730552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.856 [2024-07-16 00:06:29.730560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.856 qpair failed and we were unable to recover it. 00:30:14.856 [2024-07-16 00:06:29.730858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.856 [2024-07-16 00:06:29.730865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.856 qpair failed and we were unable to recover it. 00:30:14.856 [2024-07-16 00:06:29.731249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.856 [2024-07-16 00:06:29.731258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.856 qpair failed and we were unable to recover it. 00:30:14.856 [2024-07-16 00:06:29.731579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.731587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.731934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.731942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.732279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.732288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.732696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.732704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.733051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.733059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.733318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.733327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 
00:30:14.857 [2024-07-16 00:06:29.733710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.733718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.734087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.734095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.734526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.734533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.734835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.734842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.735197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.735205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.735453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.735461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.735802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.735811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.736160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.736169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.736562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.736570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.736921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.736930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 
00:30:14.857 [2024-07-16 00:06:29.737136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.737144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.737403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.737412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.737763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.737772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.738124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.738131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.738507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.738515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.738867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.738874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.739134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.739142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.739557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.739565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.739914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.739922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.740170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.740178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 
00:30:14.857 [2024-07-16 00:06:29.740511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.740519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.740864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.740872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.741157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.741164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.741576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.741583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.741932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.741942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.742291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.857 [2024-07-16 00:06:29.742300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.857 qpair failed and we were unable to recover it. 00:30:14.857 [2024-07-16 00:06:29.742644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.742651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.743000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.743007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.743345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.743353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.743704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.743712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 
00:30:14.858 [2024-07-16 00:06:29.744115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.744122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.744469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.744478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.744840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.744848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.745195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.745203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.745552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.745560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.745908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.745915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.746263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.746271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.746579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.746587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.746885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.746893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.747238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.747246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 
00:30:14.858 [2024-07-16 00:06:29.747591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.747599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.747940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.747949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.748284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.748292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.748709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.748717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.749009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.749016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.749232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.749240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.749559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.749567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.749950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.749957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.750306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.750314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.750686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.750694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 
00:30:14.858 [2024-07-16 00:06:29.751031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.751038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.751382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.751390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.751735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.751744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.751964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.751971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.752316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.752325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.752682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.752689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.752979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.752987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.753209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.753217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.753471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.858 [2024-07-16 00:06:29.753479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.858 qpair failed and we were unable to recover it. 00:30:14.858 [2024-07-16 00:06:29.753839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.859 [2024-07-16 00:06:29.753848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.859 qpair failed and we were unable to recover it. 
00:30:14.865 [2024-07-16 00:06:29.820521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.865 [2024-07-16 00:06:29.820529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.865 qpair failed and we were unable to recover it. 00:30:14.865 [2024-07-16 00:06:29.820875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.865 [2024-07-16 00:06:29.820883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.865 qpair failed and we were unable to recover it. 00:30:14.865 [2024-07-16 00:06:29.821315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.865 [2024-07-16 00:06:29.821323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.865 qpair failed and we were unable to recover it. 00:30:14.865 [2024-07-16 00:06:29.821557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.865 [2024-07-16 00:06:29.821565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.865 qpair failed and we were unable to recover it. 00:30:14.865 [2024-07-16 00:06:29.821918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.865 [2024-07-16 00:06:29.821926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.865 qpair failed and we were unable to recover it. 00:30:14.865 [2024-07-16 00:06:29.822275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.865 [2024-07-16 00:06:29.822283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.865 qpair failed and we were unable to recover it. 00:30:14.865 [2024-07-16 00:06:29.822617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.865 [2024-07-16 00:06:29.822626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.865 qpair failed and we were unable to recover it. 00:30:14.865 [2024-07-16 00:06:29.822959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.865 [2024-07-16 00:06:29.822967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.865 qpair failed and we were unable to recover it. 00:30:14.865 [2024-07-16 00:06:29.823324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.865 [2024-07-16 00:06:29.823332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.865 qpair failed and we were unable to recover it. 00:30:14.865 [2024-07-16 00:06:29.823692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.865 [2024-07-16 00:06:29.823699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.865 qpair failed and we were unable to recover it. 
00:30:14.865 [2024-07-16 00:06:29.823996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.865 [2024-07-16 00:06:29.824004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.865 qpair failed and we were unable to recover it. 00:30:14.865 [2024-07-16 00:06:29.824336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.865 [2024-07-16 00:06:29.824343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.865 qpair failed and we were unable to recover it. 00:30:14.865 [2024-07-16 00:06:29.824735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.865 [2024-07-16 00:06:29.824743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.865 qpair failed and we were unable to recover it. 00:30:14.865 [2024-07-16 00:06:29.824920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.865 [2024-07-16 00:06:29.824928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.865 qpair failed and we were unable to recover it. 00:30:14.865 [2024-07-16 00:06:29.825293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.865 [2024-07-16 00:06:29.825300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.865 qpair failed and we were unable to recover it. 00:30:14.865 [2024-07-16 00:06:29.825688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.865 [2024-07-16 00:06:29.825695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.865 qpair failed and we were unable to recover it. 00:30:14.865 [2024-07-16 00:06:29.826085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.865 [2024-07-16 00:06:29.826094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.865 qpair failed and we were unable to recover it. 00:30:14.865 [2024-07-16 00:06:29.826444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.865 [2024-07-16 00:06:29.826452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.865 qpair failed and we were unable to recover it. 00:30:14.865 [2024-07-16 00:06:29.826824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.865 [2024-07-16 00:06:29.826832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.865 qpair failed and we were unable to recover it. 00:30:14.865 [2024-07-16 00:06:29.827166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.865 [2024-07-16 00:06:29.827174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.865 qpair failed and we were unable to recover it. 
00:30:14.865 [2024-07-16 00:06:29.827507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.865 [2024-07-16 00:06:29.827515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.865 qpair failed and we were unable to recover it. 00:30:14.865 [2024-07-16 00:06:29.827860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.865 [2024-07-16 00:06:29.827869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.865 qpair failed and we were unable to recover it. 00:30:14.865 [2024-07-16 00:06:29.828216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.865 [2024-07-16 00:06:29.828223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.865 qpair failed and we were unable to recover it. 00:30:14.865 [2024-07-16 00:06:29.828501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.828508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.828851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.828859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.829210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.829219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.829577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.829585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.829963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.829971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.830108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.830114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.830528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.830536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 
00:30:14.866 [2024-07-16 00:06:29.830883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.830891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.831284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.831293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.831659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.831668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.832018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.832026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.832377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.832386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.832726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.832734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.833079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.833087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.833349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.833357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.833586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.833594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.834006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.834013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 
00:30:14.866 [2024-07-16 00:06:29.834364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.834373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.834762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.834770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.835111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.835120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.835369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.835376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.835730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.835737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.836081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.836089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.836342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.836350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.836705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.836712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.837058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.837067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.837420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.837427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 
00:30:14.866 [2024-07-16 00:06:29.837685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.837693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.837905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.837913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.838284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.838292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.838642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.838650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.866 qpair failed and we were unable to recover it. 00:30:14.866 [2024-07-16 00:06:29.838999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.866 [2024-07-16 00:06:29.839008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.839363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.839370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.839720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.839728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.839918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.839926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.840171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.840180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.840563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.840571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 
00:30:14.867 [2024-07-16 00:06:29.840920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.840928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.841280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.841288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.841562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.841570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.841934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.841941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.842292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.842300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.842608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.842615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.842976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.842984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.843371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.843379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.843728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.843735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.844084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.844091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 
00:30:14.867 [2024-07-16 00:06:29.844443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.844452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.844796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.844805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.845154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.845162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.845474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.845481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.845669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.845677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.846014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.846021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.846396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.846404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.846749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.846756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.847100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.847108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.847471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.847478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 
00:30:14.867 [2024-07-16 00:06:29.847860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.847868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.848117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.848125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.848498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.848506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.848851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.848858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.867 qpair failed and we were unable to recover it. 00:30:14.867 [2024-07-16 00:06:29.849207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.867 [2024-07-16 00:06:29.849215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.849533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.849541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.849960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.849967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.850235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.850244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.850623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.850631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.850984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.850993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 
00:30:14.868 [2024-07-16 00:06:29.851345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.851353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.851696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.851704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.852058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.852065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.852295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.852302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.852620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.852628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.852968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.852975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.853327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.853335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.853697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.853705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.854092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.854100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.854450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.854458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 
00:30:14.868 [2024-07-16 00:06:29.854806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.854813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.855162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.855170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.855404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.855413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.855618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.855625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.855985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.855993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.856389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.856397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.856731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.856738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.857108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.857115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.857471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.857479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.857825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.857833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 
00:30:14.868 [2024-07-16 00:06:29.857970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.857976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.858314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.858324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.858691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.858699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.859121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.859129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.859499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.859507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.859880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.859887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.860227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.868 [2024-07-16 00:06:29.860237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.868 qpair failed and we were unable to recover it. 00:30:14.868 [2024-07-16 00:06:29.860591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.860599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.860824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.860831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.860980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.860988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 
00:30:14.869 [2024-07-16 00:06:29.861352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.861359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.861739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.861746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.862087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.862094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.862444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.862451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.862887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.862894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.863185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.863192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.863446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.863454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.863787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.863795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.864149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.864156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.864371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.864379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 
00:30:14.869 [2024-07-16 00:06:29.864727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.864735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.865064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.865072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.865429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.865437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.865783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.865790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.866078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.866086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.866460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.866468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.866726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.866733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.867056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.867065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.867300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.867308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.867669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.867676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 
00:30:14.869 [2024-07-16 00:06:29.867891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.867899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.868088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.868097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.868344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.868352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.868733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.868741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.868944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.868952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.869197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.869205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.869543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.869551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.869877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.869884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.870238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.870246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 00:30:14.869 [2024-07-16 00:06:29.870426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.869 [2024-07-16 00:06:29.870434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.869 qpair failed and we were unable to recover it. 
00:30:14.869 [2024-07-16 00:06:29.870813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.870820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.871066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.871075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.871448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.871457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.871750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.871757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.872107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.872114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.872375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.872383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.872676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.872683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.873032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.873040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.873214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.873222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.873634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.873642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 
00:30:14.870 [2024-07-16 00:06:29.873900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.873908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.874292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.874300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.874734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.874742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.875103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.875111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.875557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.875565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.875911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.875919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.876162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.876169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.876510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.876517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.876882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.876889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.877245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.877253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 
00:30:14.870 [2024-07-16 00:06:29.877603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.877610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.877852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.877859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.878228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.878238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.878594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.878602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.878963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.878970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.879353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.879361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.879730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.879737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.879940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.879948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.880303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.880310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.880682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.880689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 
00:30:14.870 [2024-07-16 00:06:29.881040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.881047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.881428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.881436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.870 [2024-07-16 00:06:29.881829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.870 [2024-07-16 00:06:29.881836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.870 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.882056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.882065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.882419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.882427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.882779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.882786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.883187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.883195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.883544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.883552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.883898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.883906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.884147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.884155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 
00:30:14.871 [2024-07-16 00:06:29.884488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.884497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.884717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.884725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.885074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.885081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.885431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.885438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.885784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.885791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.886126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.886134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.886495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.886503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.886865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.886872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.887173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.887180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.887484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.887492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 
00:30:14.871 [2024-07-16 00:06:29.887837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.887845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.888195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.888203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.888568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.888577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.888916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.888923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.889264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.889272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.889601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.889609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.889958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.889966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.890306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.890314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.890658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.890666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.891004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.891011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 
00:30:14.871 [2024-07-16 00:06:29.891251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.891260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.891525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.891533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.891822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.891831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.892177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.892185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.892550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.892558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.871 [2024-07-16 00:06:29.892781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.871 [2024-07-16 00:06:29.892789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.871 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.893008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.893017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.893389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.893397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.893784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.893792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.894170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.894177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 
00:30:14.872 [2024-07-16 00:06:29.894460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.894468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.894679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.894688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.895052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.895060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.895496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.895504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.895844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.895851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.896193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.896200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.896554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.896562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.896940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.896947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.897300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.897308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.897704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.897712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 
00:30:14.872 [2024-07-16 00:06:29.898064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.898072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.898440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.898449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.898796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.898804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.899148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.899155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.899198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.899205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.899500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.899508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.899883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.899891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.900220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.900227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.900495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.900502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.900843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.900851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 
00:30:14.872 [2024-07-16 00:06:29.901232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.901240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.901394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.901401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.901634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.901643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.872 [2024-07-16 00:06:29.901993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.872 [2024-07-16 00:06:29.902000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.872 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.902389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.902397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.902568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.902575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.902948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.902955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.903312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.903320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.903566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.903574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.903936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.903943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 
00:30:14.873 [2024-07-16 00:06:29.904289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.904296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.904665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.904673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.905006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.905014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.905398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.905405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.905751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.905759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.906015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.906023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.906349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.906357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.906672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.906680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.907030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.907039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.907389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.907396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 
00:30:14.873 [2024-07-16 00:06:29.907739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.907747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.908066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.908075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.908310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.908317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.908597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.908606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.908975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.908983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.909331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.909339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.909719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.909726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.910065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.910072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.910488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.910496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.910832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.910838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 
00:30:14.873 [2024-07-16 00:06:29.911184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.911191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.911544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.911554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.911936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.911943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.912336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.912344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.912677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.912684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.913029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.913037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.873 qpair failed and we were unable to recover it. 00:30:14.873 [2024-07-16 00:06:29.913343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.873 [2024-07-16 00:06:29.913350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.874 qpair failed and we were unable to recover it. 00:30:14.874 [2024-07-16 00:06:29.913519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.874 [2024-07-16 00:06:29.913527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.874 qpair failed and we were unable to recover it. 00:30:14.874 [2024-07-16 00:06:29.913850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.874 [2024-07-16 00:06:29.913857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.874 qpair failed and we were unable to recover it. 00:30:14.874 [2024-07-16 00:06:29.914222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.874 [2024-07-16 00:06:29.914233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.874 qpair failed and we were unable to recover it. 
00:30:14.874 [2024-07-16 00:06:29.914599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.874 [2024-07-16 00:06:29.914607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.874 qpair failed and we were unable to recover it. 00:30:14.874 [2024-07-16 00:06:29.914902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.874 [2024-07-16 00:06:29.914910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.874 qpair failed and we were unable to recover it. 00:30:14.874 [2024-07-16 00:06:29.915260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.874 [2024-07-16 00:06:29.915268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.874 qpair failed and we were unable to recover it. 00:30:14.874 [2024-07-16 00:06:29.915512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.874 [2024-07-16 00:06:29.915520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.874 qpair failed and we were unable to recover it. 00:30:14.874 [2024-07-16 00:06:29.915872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.874 [2024-07-16 00:06:29.915879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.874 qpair failed and we were unable to recover it. 00:30:14.874 [2024-07-16 00:06:29.916265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.874 [2024-07-16 00:06:29.916272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.874 qpair failed and we were unable to recover it. 00:30:14.874 [2024-07-16 00:06:29.916642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.874 [2024-07-16 00:06:29.916649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.874 qpair failed and we were unable to recover it. 00:30:14.874 [2024-07-16 00:06:29.916996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.874 [2024-07-16 00:06:29.917004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.874 qpair failed and we were unable to recover it. 00:30:14.874 [2024-07-16 00:06:29.917441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.874 [2024-07-16 00:06:29.917449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.874 qpair failed and we were unable to recover it. 00:30:14.874 [2024-07-16 00:06:29.917835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.874 [2024-07-16 00:06:29.917842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.874 qpair failed and we were unable to recover it. 
00:30:14.874 [2024-07-16 00:06:29.918192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.874 [2024-07-16 00:06:29.918199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.874 qpair failed and we were unable to recover it. 00:30:14.874 [2024-07-16 00:06:29.918370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.874 [2024-07-16 00:06:29.918378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.874 qpair failed and we were unable to recover it. 00:30:14.874 [2024-07-16 00:06:29.918738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.874 [2024-07-16 00:06:29.918745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.874 qpair failed and we were unable to recover it. 00:30:14.874 [2024-07-16 00:06:29.919124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.874 [2024-07-16 00:06:29.919133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.874 qpair failed and we were unable to recover it. 00:30:14.874 [2024-07-16 00:06:29.919527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.874 [2024-07-16 00:06:29.919535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.874 qpair failed and we were unable to recover it. 00:30:14.874 [2024-07-16 00:06:29.919882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.874 [2024-07-16 00:06:29.919890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.874 qpair failed and we were unable to recover it. 00:30:14.874 [2024-07-16 00:06:29.920269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.874 [2024-07-16 00:06:29.920277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.874 qpair failed and we were unable to recover it. 00:30:14.874 [2024-07-16 00:06:29.920507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.874 [2024-07-16 00:06:29.920514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.874 qpair failed and we were unable to recover it. 00:30:14.874 [2024-07-16 00:06:29.920869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.874 [2024-07-16 00:06:29.920877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.874 qpair failed and we were unable to recover it. 00:30:14.874 [2024-07-16 00:06:29.921231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.874 [2024-07-16 00:06:29.921239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.874 qpair failed and we were unable to recover it. 
00:30:14.874 [2024-07-16 00:06:29.921520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.874 [2024-07-16 00:06:29.921528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420
00:30:14.874 qpair failed and we were unable to recover it.
00:30:14.874 - 00:30:14.881 [2024-07-16 00:06:29.921993 through 00:06:29.992924] the same three messages repeat for more than 200 further connection attempts: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it."
00:30:14.881 [2024-07-16 00:06:29.993175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.881 [2024-07-16 00:06:29.993182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.881 qpair failed and we were unable to recover it. 00:30:14.881 [2024-07-16 00:06:29.993525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.881 [2024-07-16 00:06:29.993533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.881 qpair failed and we were unable to recover it. 00:30:14.881 [2024-07-16 00:06:29.993878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.881 [2024-07-16 00:06:29.993886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.881 qpair failed and we were unable to recover it. 00:30:14.881 [2024-07-16 00:06:29.994256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.881 [2024-07-16 00:06:29.994265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.881 qpair failed and we were unable to recover it. 00:30:14.882 [2024-07-16 00:06:29.994625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:29.994633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 00:30:14.882 [2024-07-16 00:06:29.994970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:29.994978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 00:30:14.882 [2024-07-16 00:06:29.995175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:29.995184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 00:30:14.882 [2024-07-16 00:06:29.995495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:29.995503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 00:30:14.882 [2024-07-16 00:06:29.995847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:29.995855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 00:30:14.882 [2024-07-16 00:06:29.996200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:29.996208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 
00:30:14.882 [2024-07-16 00:06:29.996546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:29.996554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 00:30:14.882 [2024-07-16 00:06:29.996931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:29.996940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 00:30:14.882 [2024-07-16 00:06:29.997287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:29.997295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 00:30:14.882 [2024-07-16 00:06:29.997640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:29.997647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 00:30:14.882 [2024-07-16 00:06:29.997993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:29.998000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 00:30:14.882 [2024-07-16 00:06:29.998211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:29.998219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 00:30:14.882 [2024-07-16 00:06:29.998522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:29.998530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 00:30:14.882 [2024-07-16 00:06:29.998874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:29.998882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 00:30:14.882 [2024-07-16 00:06:29.999273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:29.999282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 00:30:14.882 [2024-07-16 00:06:29.999643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:29.999650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 
00:30:14.882 [2024-07-16 00:06:29.999997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:30.000004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 00:30:14.882 [2024-07-16 00:06:30.000184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:30.000193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 00:30:14.882 [2024-07-16 00:06:30.000537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:30.000545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 00:30:14.882 [2024-07-16 00:06:30.000910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:30.000917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 00:30:14.882 [2024-07-16 00:06:30.001280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:30.001288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 00:30:14.882 [2024-07-16 00:06:30.001657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:30.001664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 00:30:14.882 [2024-07-16 00:06:30.002021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:30.002029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 00:30:14.882 [2024-07-16 00:06:30.002335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:30.002344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 00:30:14.882 [2024-07-16 00:06:30.003147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:30.003165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 00:30:14.882 [2024-07-16 00:06:30.003715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.882 [2024-07-16 00:06:30.003724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.882 qpair failed and we were unable to recover it. 
00:30:14.882 [2024-07-16 00:06:30.004076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.004084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.004332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.004340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.004582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.004590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.004973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.004981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.005247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.005255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.005649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.005657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.006091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.006099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.006493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.006500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.006880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.006888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.007275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.007283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 
00:30:14.883 [2024-07-16 00:06:30.007503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.007511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.007853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.007861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.008236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.008244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.008636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.008644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.008995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.009002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.009395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.009403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.009751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.009759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.009961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.009970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.010333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.010341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.010701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.010709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 
00:30:14.883 [2024-07-16 00:06:30.010933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.010941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.011240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.011249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.011513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.011520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.011870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.011878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.012222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.012235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.012578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.012586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.012935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.012944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.013260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.883 [2024-07-16 00:06:30.013267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.883 qpair failed and we were unable to recover it. 00:30:14.883 [2024-07-16 00:06:30.013640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.013647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.014016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.014024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 
00:30:14.884 [2024-07-16 00:06:30.014388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.014396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.014748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.014757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.014977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.014984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.015206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.015214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.015352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.015359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.015652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.015660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.015907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.015916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.016296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.016304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.016677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.016686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.017040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.017048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 
00:30:14.884 [2024-07-16 00:06:30.017422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.017430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.017677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.017685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.017983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.017991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.018202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.018210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.018467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.018474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.018775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.018783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.018995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.019003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.019357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.019365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.019718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.019726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.020114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.020121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 
00:30:14.884 [2024-07-16 00:06:30.020364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.020372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.020774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.020781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.021045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.021053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.021441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.021449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.021805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.021813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.022195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.022203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.022562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.022570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.022818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.022827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.023253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.023262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 00:30:14.884 [2024-07-16 00:06:30.023512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.884 [2024-07-16 00:06:30.023521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.884 qpair failed and we were unable to recover it. 
00:30:14.884 [2024-07-16 00:06:30.023744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.023752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:14.885 [2024-07-16 00:06:30.024056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.024065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:14.885 [2024-07-16 00:06:30.024282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.024291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:14.885 [2024-07-16 00:06:30.024650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.024659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:14.885 [2024-07-16 00:06:30.025031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.025039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:14.885 [2024-07-16 00:06:30.025349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.025358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:14.885 [2024-07-16 00:06:30.025674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.025682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:14.885 [2024-07-16 00:06:30.026048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.026056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:14.885 [2024-07-16 00:06:30.026350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.026359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:14.885 [2024-07-16 00:06:30.026759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.026767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 
00:30:14.885 [2024-07-16 00:06:30.027113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.027121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:14.885 [2024-07-16 00:06:30.027492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.027501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:14.885 [2024-07-16 00:06:30.027837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.027845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:14.885 [2024-07-16 00:06:30.028263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.028271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:14.885 [2024-07-16 00:06:30.028457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.028465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:14.885 [2024-07-16 00:06:30.028802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.028809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:14.885 [2024-07-16 00:06:30.029192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.029200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:14.885 [2024-07-16 00:06:30.029428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.029437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:14.885 [2024-07-16 00:06:30.029697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.029707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:14.885 [2024-07-16 00:06:30.030058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.030066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 
00:30:14.885 [2024-07-16 00:06:30.030418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.030426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:14.885 [2024-07-16 00:06:30.030741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.030748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:14.885 [2024-07-16 00:06:30.031103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.031113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:14.885 [2024-07-16 00:06:30.031311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.031319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:14.885 [2024-07-16 00:06:30.031710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.031718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:14.885 [2024-07-16 00:06:30.032086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.032095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:14.885 [2024-07-16 00:06:30.032308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.885 [2024-07-16 00:06:30.032315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:14.885 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.032657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.032667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.032998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.033007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.033352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.033361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 
00:30:15.159 [2024-07-16 00:06:30.033726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.033734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.034079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.034087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.034456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.034464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.034655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.034663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.034995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.035003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.035344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.035353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.035665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.035674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.036018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.036026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.036372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.036380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.036697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.036705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 
00:30:15.159 [2024-07-16 00:06:30.037078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.037085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.037469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.037477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.037824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.037833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.038075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.038084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.038289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.038298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.038609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.038619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.038830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.038840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.039186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.039195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.039541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.039550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.039925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.039933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 
00:30:15.159 [2024-07-16 00:06:30.040249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.040257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.040607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.040615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.041000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.041009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.041430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.041438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.159 [2024-07-16 00:06:30.041783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.159 [2024-07-16 00:06:30.041791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.159 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.042133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.042140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.042324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.042333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.042666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.042674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.043040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.043049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.043332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.043340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 
00:30:15.160 [2024-07-16 00:06:30.043690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.043698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.044075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.044082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.044429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.044437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.044790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.044797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.045161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.045169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.045508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.045517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.045861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.045870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.046198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.046207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.046424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.046433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.046800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.046809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 
00:30:15.160 [2024-07-16 00:06:30.047156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.047164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.047512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.047519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.047851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.047859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.048222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.048234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.048619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.048626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.048960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.048968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.049328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.049336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.049664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.049673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.050001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.050009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.050353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.050362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 
00:30:15.160 [2024-07-16 00:06:30.050583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.050592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.050969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.050977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.051314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.051323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.051667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.051675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.051929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.051936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.052277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.052285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.052673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.052682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.160 [2024-07-16 00:06:30.053027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.160 [2024-07-16 00:06:30.053035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.160 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.053363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.053371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.053714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.053722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 
00:30:15.161 [2024-07-16 00:06:30.054067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.054075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.054312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.054320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.054674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.054682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.055013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.055021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.055329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.055338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.055719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.055726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.056058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.056066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.056431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.056439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.056784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.056793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.057146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.057154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 
00:30:15.161 [2024-07-16 00:06:30.057342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.057351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.057681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.057689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.058034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.058042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.058221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.058241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.058584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.058593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.058959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.058966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.059312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.059320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.059632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.059640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.060007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.060015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.060393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.060401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 
00:30:15.161 [2024-07-16 00:06:30.060749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.060757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.061110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.061119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.061318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.061327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.061516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.061524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.061836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.061844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 00:30:15.161 [2024-07-16 00:06:30.061957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.161 [2024-07-16 00:06:30.061965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.161 qpair failed and we were unable to recover it. 
00:30:15.161 Read completed with error (sct=0, sc=8) 00:30:15.161 starting I/O failed 00:30:15.161 Read completed with error (sct=0, sc=8) 00:30:15.161 starting I/O failed 00:30:15.161 Read completed with error (sct=0, sc=8) 00:30:15.161 starting I/O failed 00:30:15.161 Read completed with error (sct=0, sc=8) 00:30:15.161 starting I/O failed 00:30:15.161 Read completed with error (sct=0, sc=8) 00:30:15.161 starting I/O failed 00:30:15.161 Write completed with error (sct=0, sc=8) 00:30:15.161 starting I/O failed 00:30:15.161 Write completed with error (sct=0, sc=8) 00:30:15.161 starting I/O failed 00:30:15.161 Read completed with error (sct=0, sc=8) 00:30:15.161 starting I/O failed 00:30:15.161 Write completed with error (sct=0, sc=8) 00:30:15.161 starting I/O failed 00:30:15.161 Write completed with error (sct=0, sc=8) 00:30:15.161 starting I/O failed 00:30:15.161 Read completed with error (sct=0, sc=8) 00:30:15.161 starting I/O failed 00:30:15.161 Read completed with error (sct=0, sc=8) 00:30:15.161 starting I/O failed 00:30:15.161 Write completed with error (sct=0, sc=8) 00:30:15.161 starting I/O failed 00:30:15.161 Write completed with error (sct=0, sc=8) 00:30:15.161 starting I/O failed 00:30:15.161 Write completed with error (sct=0, sc=8) 00:30:15.161 starting I/O failed 00:30:15.161 Read completed with error (sct=0, sc=8) 00:30:15.162 starting I/O failed 00:30:15.162 Read completed with error (sct=0, sc=8) 00:30:15.162 starting I/O failed 00:30:15.162 Read completed with error (sct=0, sc=8) 00:30:15.162 starting I/O failed 00:30:15.162 Read completed with error (sct=0, sc=8) 00:30:15.162 starting I/O failed 00:30:15.162 Write completed with error (sct=0, sc=8) 00:30:15.162 starting I/O failed 00:30:15.162 Write completed with error (sct=0, sc=8) 00:30:15.162 starting I/O failed 00:30:15.162 Read completed with error (sct=0, sc=8) 00:30:15.162 starting I/O failed 00:30:15.162 Read completed with error (sct=0, sc=8) 00:30:15.162 starting I/O failed 00:30:15.162 Write completed with error (sct=0, sc=8) 00:30:15.162 starting I/O failed 00:30:15.162 Read completed with error (sct=0, sc=8) 00:30:15.162 starting I/O failed 00:30:15.162 Write completed with error (sct=0, sc=8) 00:30:15.162 starting I/O failed 00:30:15.162 Write completed with error (sct=0, sc=8) 00:30:15.162 starting I/O failed 00:30:15.162 Write completed with error (sct=0, sc=8) 00:30:15.162 starting I/O failed 00:30:15.162 Write completed with error (sct=0, sc=8) 00:30:15.162 starting I/O failed 00:30:15.162 Read completed with error (sct=0, sc=8) 00:30:15.162 starting I/O failed 00:30:15.162 Read completed with error (sct=0, sc=8) 00:30:15.162 starting I/O failed 00:30:15.162 Write completed with error (sct=0, sc=8) 00:30:15.162 starting I/O failed 00:30:15.162 [2024-07-16 00:06:30.062697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:15.162 [2024-07-16 00:06:30.063193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.063247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c0000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 
00:30:15.162 [2024-07-16 00:06:30.063702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.063732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c0000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 00:30:15.162 [2024-07-16 00:06:30.064106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.064145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c0000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 00:30:15.162 [2024-07-16 00:06:30.064531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.064540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 00:30:15.162 [2024-07-16 00:06:30.064913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.064921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 00:30:15.162 [2024-07-16 00:06:30.065261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.065269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 00:30:15.162 [2024-07-16 00:06:30.065620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.065628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 00:30:15.162 [2024-07-16 00:06:30.065968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.065975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 00:30:15.162 [2024-07-16 00:06:30.066350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.066359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 00:30:15.162 [2024-07-16 00:06:30.066734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.066743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 00:30:15.162 [2024-07-16 00:06:30.067082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.067091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 
00:30:15.162 [2024-07-16 00:06:30.067436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.067444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 00:30:15.162 [2024-07-16 00:06:30.067809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.067817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 00:30:15.162 [2024-07-16 00:06:30.068001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.068010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 00:30:15.162 [2024-07-16 00:06:30.068353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.068361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 00:30:15.162 [2024-07-16 00:06:30.068702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.068709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 00:30:15.162 [2024-07-16 00:06:30.069080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.069088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 00:30:15.162 [2024-07-16 00:06:30.069456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.069464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 00:30:15.162 [2024-07-16 00:06:30.069802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.069809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 00:30:15.162 [2024-07-16 00:06:30.070174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.070181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 00:30:15.162 [2024-07-16 00:06:30.070508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.070516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 
00:30:15.162 [2024-07-16 00:06:30.070880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.070887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 00:30:15.162 [2024-07-16 00:06:30.071242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.071251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 00:30:15.162 [2024-07-16 00:06:30.071624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.071633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 00:30:15.162 [2024-07-16 00:06:30.072001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.072009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 00:30:15.162 [2024-07-16 00:06:30.072438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.072467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 00:30:15.162 [2024-07-16 00:06:30.072816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.162 [2024-07-16 00:06:30.072825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.162 qpair failed and we were unable to recover it. 00:30:15.162 [2024-07-16 00:06:30.073203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.073211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.073561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.073570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.073948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.073956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.074306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.074315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 
00:30:15.163 [2024-07-16 00:06:30.074583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.074591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.075013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.075021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.075349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.075357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.075719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.075727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.076094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.076102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.076472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.076480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.076850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.076858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.077211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.077219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.077571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.077579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.077944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.077952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 
00:30:15.163 [2024-07-16 00:06:30.078427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.078456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.078815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.078829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.079042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.079051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.079285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.079300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.079609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.079618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.079961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.079969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.080322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.080330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.080674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.080681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.081061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.081069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.081441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.081449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 
00:30:15.163 [2024-07-16 00:06:30.081867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.081874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.082189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.082196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.082567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.082575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.082921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.082930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.083274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.083282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.083636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.083643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.083991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.083999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.084344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.084352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.084702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.163 [2024-07-16 00:06:30.084710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.163 qpair failed and we were unable to recover it. 00:30:15.163 [2024-07-16 00:06:30.085082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.085090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 
00:30:15.164 [2024-07-16 00:06:30.085344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.085352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 00:30:15.164 [2024-07-16 00:06:30.085696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.085704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 00:30:15.164 [2024-07-16 00:06:30.086045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.086053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 00:30:15.164 [2024-07-16 00:06:30.086397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.086406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 00:30:15.164 [2024-07-16 00:06:30.086823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.086832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 00:30:15.164 [2024-07-16 00:06:30.087181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.087189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 00:30:15.164 [2024-07-16 00:06:30.087454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.087463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 00:30:15.164 [2024-07-16 00:06:30.087788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.087797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 00:30:15.164 [2024-07-16 00:06:30.088126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.088134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 00:30:15.164 [2024-07-16 00:06:30.088281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.088289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 
00:30:15.164 [2024-07-16 00:06:30.088620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.088628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 00:30:15.164 [2024-07-16 00:06:30.089008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.089016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 00:30:15.164 [2024-07-16 00:06:30.089245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.089254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 00:30:15.164 [2024-07-16 00:06:30.089649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.089657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 00:30:15.164 [2024-07-16 00:06:30.090025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.090033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 00:30:15.164 [2024-07-16 00:06:30.090299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.090307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 00:30:15.164 [2024-07-16 00:06:30.090388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.090395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 00:30:15.164 [2024-07-16 00:06:30.090671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.090678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 00:30:15.164 [2024-07-16 00:06:30.091065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.091072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 00:30:15.164 [2024-07-16 00:06:30.091357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.091366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 
00:30:15.164 [2024-07-16 00:06:30.091513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.091519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 00:30:15.164 [2024-07-16 00:06:30.091793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.091804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 00:30:15.164 [2024-07-16 00:06:30.092157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.092165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 00:30:15.164 [2024-07-16 00:06:30.092307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.092315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 00:30:15.164 [2024-07-16 00:06:30.092494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.092502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 00:30:15.164 [2024-07-16 00:06:30.092758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.164 [2024-07-16 00:06:30.092767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.164 qpair failed and we were unable to recover it. 00:30:15.164 [2024-07-16 00:06:30.093102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.093110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.093458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.093466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.093829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.093837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.094186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.094193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 
00:30:15.165 [2024-07-16 00:06:30.094483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.094491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.094842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.094849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.095238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.095247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.095473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.095481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.095711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.095720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.096077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.096084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.096456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.096464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.096810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.096818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.097167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.097174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.097522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.097530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 
00:30:15.165 [2024-07-16 00:06:30.097898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.097905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.098256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.098264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.098373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.098380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.098696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.098704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.098960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.098968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.099308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.099317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.099660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.099667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.099944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.099951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.100292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.100301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.100569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.100577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 
00:30:15.165 [2024-07-16 00:06:30.100921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.100928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.101133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.101142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.101458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.101466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.101824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.101831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.102182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.102189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.102565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.102574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.102942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.102950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.103294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.103302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.103655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.165 [2024-07-16 00:06:30.103662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.165 qpair failed and we were unable to recover it. 00:30:15.165 [2024-07-16 00:06:30.103918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.103925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 
00:30:15.166 [2024-07-16 00:06:30.104326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.104334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.104636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.104645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.105036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.105044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.105379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.105387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.105737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.105745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.106089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.106097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.106439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.106446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.106778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.106786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.107109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.107117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.107488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.107496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 
00:30:15.166 [2024-07-16 00:06:30.107849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.107857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.108226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.108238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.108596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.108604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.108948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.108956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.109308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.109316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.109659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.109667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.110035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.110043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.110393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.110401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.110754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.110762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.111129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.111137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 
00:30:15.166 [2024-07-16 00:06:30.111482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.111490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.111821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.111829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.112058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.112066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.112434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.112442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.112810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.112818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.113164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.113171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.113383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.113393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.113762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.113769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.114145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.114153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.114502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.114510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 
00:30:15.166 [2024-07-16 00:06:30.114854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.114862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.115239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.166 [2024-07-16 00:06:30.115247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.166 qpair failed and we were unable to recover it. 00:30:15.166 [2024-07-16 00:06:30.115566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.115574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.115926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.115934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.116325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.116333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.116516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.116525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.116727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.116735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.117071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.117078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.117428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.117436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.117816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.117824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 
00:30:15.167 [2024-07-16 00:06:30.118016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.118023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.118384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.118394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.118764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.118771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.119010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.119017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.119356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.119364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.119582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.119591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.119782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.119790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.120141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.120149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.120493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.120501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.120850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.120857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 
00:30:15.167 [2024-07-16 00:06:30.121206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.121214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.121583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.121591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.121959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.121968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.122315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.122324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.122666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.122673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.122866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.122874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.123254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.123262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.123601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.123608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.123943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.123951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.124220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.124227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 
00:30:15.167 [2024-07-16 00:06:30.124591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.124598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.124946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.124953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.125299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.125307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.125689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.125696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.126057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.167 [2024-07-16 00:06:30.126065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.167 qpair failed and we were unable to recover it. 00:30:15.167 [2024-07-16 00:06:30.126313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.126321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.126675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.126682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.127056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.127064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.127287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.127295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.127642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.127650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 
00:30:15.168 [2024-07-16 00:06:30.127997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.128006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.128372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.128379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.128745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.128753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.129099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.129107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.129474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.129482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.129791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.129799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.130174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.130182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.130412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.130419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.130632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.130641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.131003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.131010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 
00:30:15.168 [2024-07-16 00:06:30.131368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.131375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.131758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.131768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.132079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.132086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.132442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.132450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.132774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.132782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.132983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.132991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.133336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.133344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.133720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.133727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.134095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.134103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.134442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.134450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 
00:30:15.168 [2024-07-16 00:06:30.134797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.134804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.135170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.135178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.135518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.135526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.135871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.135879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.136226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.136240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.136589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.136597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.136961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.136968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.137313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.168 [2024-07-16 00:06:30.137321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.168 qpair failed and we were unable to recover it. 00:30:15.168 [2024-07-16 00:06:30.137563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.137570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.137904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.137912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 
00:30:15.169 [2024-07-16 00:06:30.138284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.138292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.138627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.138635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.138981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.138988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.139334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.139343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.139681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.139689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.140056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.140065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.140409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.140417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.140746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.140754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.141124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.141131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.141469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.141477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 
00:30:15.169 [2024-07-16 00:06:30.141813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.141821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.142194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.142201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.142566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.142573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.142921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.142928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.143112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.143120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.143453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.143461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.143834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.143842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.144184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.144192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.144537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.144545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.144877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.144884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 
00:30:15.169 [2024-07-16 00:06:30.145259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.145267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.145612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.145622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.145965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.145974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.146335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.146343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.146717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.146725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.147075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.147083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.169 qpair failed and we were unable to recover it. 00:30:15.169 [2024-07-16 00:06:30.147430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.169 [2024-07-16 00:06:30.147438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.170 qpair failed and we were unable to recover it. 00:30:15.170 [2024-07-16 00:06:30.147756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.170 [2024-07-16 00:06:30.147764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.170 qpair failed and we were unable to recover it. 00:30:15.170 [2024-07-16 00:06:30.148132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.170 [2024-07-16 00:06:30.148139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.170 qpair failed and we were unable to recover it. 00:30:15.170 [2024-07-16 00:06:30.148503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.170 [2024-07-16 00:06:30.148511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.170 qpair failed and we were unable to recover it. 
00:30:15.170 [2024-07-16 00:06:30.148846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.170 [2024-07-16 00:06:30.148854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.170 qpair failed and we were unable to recover it. 00:30:15.170 [2024-07-16 00:06:30.149083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.170 [2024-07-16 00:06:30.149091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.170 qpair failed and we were unable to recover it. 00:30:15.170 [2024-07-16 00:06:30.149452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.170 [2024-07-16 00:06:30.149460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.170 qpair failed and we were unable to recover it. 00:30:15.170 [2024-07-16 00:06:30.149811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.170 [2024-07-16 00:06:30.149819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.170 qpair failed and we were unable to recover it. 00:30:15.170 [2024-07-16 00:06:30.150168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.170 [2024-07-16 00:06:30.150176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.170 qpair failed and we were unable to recover it. 00:30:15.170 [2024-07-16 00:06:30.150552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.170 [2024-07-16 00:06:30.150560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.170 qpair failed and we were unable to recover it. 00:30:15.170 [2024-07-16 00:06:30.150928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.170 [2024-07-16 00:06:30.150936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.170 qpair failed and we were unable to recover it. 00:30:15.170 [2024-07-16 00:06:30.151280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.170 [2024-07-16 00:06:30.151288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.170 qpair failed and we were unable to recover it. 00:30:15.170 [2024-07-16 00:06:30.151705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.170 [2024-07-16 00:06:30.151712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.170 qpair failed and we were unable to recover it. 00:30:15.170 [2024-07-16 00:06:30.152041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.170 [2024-07-16 00:06:30.152049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.170 qpair failed and we were unable to recover it. 
00:30:15.176 [2024-07-16 00:06:30.217313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.176 [2024-07-16 00:06:30.217321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.176 qpair failed and we were unable to recover it. 00:30:15.176 [2024-07-16 00:06:30.217669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.176 [2024-07-16 00:06:30.217677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.176 qpair failed and we were unable to recover it. 00:30:15.176 [2024-07-16 00:06:30.218052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.176 [2024-07-16 00:06:30.218060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.176 qpair failed and we were unable to recover it. 00:30:15.176 [2024-07-16 00:06:30.218402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.176 [2024-07-16 00:06:30.218409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.176 qpair failed and we were unable to recover it. 00:30:15.176 [2024-07-16 00:06:30.218762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.176 [2024-07-16 00:06:30.218770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.176 qpair failed and we were unable to recover it. 00:30:15.176 [2024-07-16 00:06:30.218955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.176 [2024-07-16 00:06:30.218963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.176 qpair failed and we were unable to recover it. 00:30:15.176 [2024-07-16 00:06:30.219298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.176 [2024-07-16 00:06:30.219306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.176 qpair failed and we were unable to recover it. 00:30:15.176 [2024-07-16 00:06:30.219633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.176 [2024-07-16 00:06:30.219641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.176 qpair failed and we were unable to recover it. 00:30:15.176 [2024-07-16 00:06:30.219973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.176 [2024-07-16 00:06:30.219980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.176 qpair failed and we were unable to recover it. 00:30:15.176 [2024-07-16 00:06:30.220328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.176 [2024-07-16 00:06:30.220336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.176 qpair failed and we were unable to recover it. 
00:30:15.176 [2024-07-16 00:06:30.220697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.176 [2024-07-16 00:06:30.220705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.176 qpair failed and we were unable to recover it. 00:30:15.176 [2024-07-16 00:06:30.221069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.176 [2024-07-16 00:06:30.221077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.176 qpair failed and we were unable to recover it. 00:30:15.176 [2024-07-16 00:06:30.221425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.176 [2024-07-16 00:06:30.221433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.176 qpair failed and we were unable to recover it. 00:30:15.176 [2024-07-16 00:06:30.221787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.176 [2024-07-16 00:06:30.221796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.176 qpair failed and we were unable to recover it. 00:30:15.176 [2024-07-16 00:06:30.222124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.176 [2024-07-16 00:06:30.222132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.176 qpair failed and we were unable to recover it. 00:30:15.176 [2024-07-16 00:06:30.222502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.176 [2024-07-16 00:06:30.222510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.176 qpair failed and we were unable to recover it. 00:30:15.176 [2024-07-16 00:06:30.222855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.176 [2024-07-16 00:06:30.222863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.176 qpair failed and we were unable to recover it. 00:30:15.176 [2024-07-16 00:06:30.223046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.176 [2024-07-16 00:06:30.223054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.176 qpair failed and we were unable to recover it. 00:30:15.176 [2024-07-16 00:06:30.223398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.176 [2024-07-16 00:06:30.223407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.176 qpair failed and we were unable to recover it. 00:30:15.176 [2024-07-16 00:06:30.223779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.176 [2024-07-16 00:06:30.223787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 
00:30:15.177 [2024-07-16 00:06:30.224152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.224160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.224515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.224524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.224892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.224900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.225178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.225185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.225524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.225532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.225877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.225885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.226260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.226267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.226639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.226646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.227023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.227031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.227383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.227391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 
00:30:15.177 [2024-07-16 00:06:30.227754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.227762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.228131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.228140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.228454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.228463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.228806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.228814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.229136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.229143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.229496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.229505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.229716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.229724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.230060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.230068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.230414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.230422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.230772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.230780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 
00:30:15.177 [2024-07-16 00:06:30.231144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.231153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.231491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.231500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.231863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.231871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.232277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.232285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.232653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.232661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.233006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.233014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.233341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.233349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.233716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.233724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.234085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.234093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.234439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.234447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 
00:30:15.177 [2024-07-16 00:06:30.234819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.234827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.235191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.177 [2024-07-16 00:06:30.235199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.177 qpair failed and we were unable to recover it. 00:30:15.177 [2024-07-16 00:06:30.235583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.235591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.235935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.235942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.236311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.236319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.236536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.236544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.236857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.236865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.237209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.237217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.237585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.237595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.237926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.237934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 
00:30:15.178 [2024-07-16 00:06:30.238245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.238254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.238615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.238622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.238987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.238995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.239338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.239346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.239707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.239715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.240051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.240058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.240400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.240407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.240770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.240778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.241123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.241130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.241501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.241510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 
00:30:15.178 [2024-07-16 00:06:30.241836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.241844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.242210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.242218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.242581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.242589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.242940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.242947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.243318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.243326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.243728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.243736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.244054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.244062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.244410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.244417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.244787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.244795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.245159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.245166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 
00:30:15.178 [2024-07-16 00:06:30.245497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.245505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.245851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.245859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.246240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.246249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.246610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.246618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.246972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.178 [2024-07-16 00:06:30.246980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.178 qpair failed and we were unable to recover it. 00:30:15.178 [2024-07-16 00:06:30.247333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.247341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.247703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.247711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.248078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.248085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.248440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.248447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.248793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.248800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 
00:30:15.179 [2024-07-16 00:06:30.249172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.249179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.249484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.249492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.249844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.249852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.250160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.250168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.250513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.250520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.250881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.250888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.251238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.251246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.251597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.251605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.251972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.251980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.252354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.252362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 
00:30:15.179 [2024-07-16 00:06:30.252711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.252719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.253070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.253078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.253494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.253502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.253866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.253875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.254216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.254223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.254568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.254576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.254945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.254952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.255338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.255345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.255706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.255714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.255973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.255980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 
00:30:15.179 [2024-07-16 00:06:30.256388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.256396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.256602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.256611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.256920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.256928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.257294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.257302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.257550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.257558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.257812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.257819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.258156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.258164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.258458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.179 [2024-07-16 00:06:30.258466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.179 qpair failed and we were unable to recover it. 00:30:15.179 [2024-07-16 00:06:30.258770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.180 [2024-07-16 00:06:30.258778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.180 qpair failed and we were unable to recover it. 00:30:15.180 [2024-07-16 00:06:30.259164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.180 [2024-07-16 00:06:30.259172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.180 qpair failed and we were unable to recover it. 
00:30:15.180 [2024-07-16 00:06:30.259388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.180 [2024-07-16 00:06:30.259396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.180 qpair failed and we were unable to recover it. 00:30:15.180 [2024-07-16 00:06:30.259745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.180 [2024-07-16 00:06:30.259753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.180 qpair failed and we were unable to recover it. 00:30:15.180 [2024-07-16 00:06:30.260118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.180 [2024-07-16 00:06:30.260126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.180 qpair failed and we were unable to recover it. 00:30:15.180 [2024-07-16 00:06:30.260447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.180 [2024-07-16 00:06:30.260454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.180 qpair failed and we were unable to recover it. 00:30:15.180 [2024-07-16 00:06:30.260800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.180 [2024-07-16 00:06:30.260807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.180 qpair failed and we were unable to recover it. 00:30:15.180 [2024-07-16 00:06:30.261019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.180 [2024-07-16 00:06:30.261027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.180 qpair failed and we were unable to recover it. 00:30:15.180 [2024-07-16 00:06:30.261357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.180 [2024-07-16 00:06:30.261366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.180 qpair failed and we were unable to recover it. 00:30:15.180 [2024-07-16 00:06:30.261684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.180 [2024-07-16 00:06:30.261692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.180 qpair failed and we were unable to recover it. 00:30:15.180 [2024-07-16 00:06:30.261996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.180 [2024-07-16 00:06:30.262004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.180 qpair failed and we were unable to recover it. 00:30:15.180 [2024-07-16 00:06:30.262349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.180 [2024-07-16 00:06:30.262357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.180 qpair failed and we were unable to recover it. 
00:30:15.180 [2024-07-16 00:06:30.262756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.180 [2024-07-16 00:06:30.262764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.180 qpair failed and we were unable to recover it. 00:30:15.180 [2024-07-16 00:06:30.263141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.180 [2024-07-16 00:06:30.263149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.180 qpair failed and we were unable to recover it. 00:30:15.180 [2024-07-16 00:06:30.263506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.180 [2024-07-16 00:06:30.263514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.180 qpair failed and we were unable to recover it. 00:30:15.180 [2024-07-16 00:06:30.263884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.180 [2024-07-16 00:06:30.263892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.180 qpair failed and we were unable to recover it. 00:30:15.180 [2024-07-16 00:06:30.264259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.180 [2024-07-16 00:06:30.264267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.180 qpair failed and we were unable to recover it. 00:30:15.180 [2024-07-16 00:06:30.264593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.180 [2024-07-16 00:06:30.264601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.180 qpair failed and we were unable to recover it. 00:30:15.180 [2024-07-16 00:06:30.264843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.180 [2024-07-16 00:06:30.264850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.180 qpair failed and we were unable to recover it. 00:30:15.180 [2024-07-16 00:06:30.265054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.180 [2024-07-16 00:06:30.265063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.180 qpair failed and we were unable to recover it. 00:30:15.180 [2024-07-16 00:06:30.265394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.180 [2024-07-16 00:06:30.265403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.180 qpair failed and we were unable to recover it. 00:30:15.180 [2024-07-16 00:06:30.265780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.180 [2024-07-16 00:06:30.265787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.180 qpair failed and we were unable to recover it. 
00:30:15.180 [2024-07-16 00:06:30.265999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:15.180 [2024-07-16 00:06:30.266008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 
00:30:15.180 qpair failed and we were unable to recover it. 
00:30:15.180 [... the same three-line sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 00:06:30.266258 through 00:06:30.338262 ...] 
00:30:15.473 [2024-07-16 00:06:30.338590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:15.473 [2024-07-16 00:06:30.338599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 
00:30:15.473 qpair failed and we were unable to recover it. 
00:30:15.473 [2024-07-16 00:06:30.338943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.473 [2024-07-16 00:06:30.338951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.473 qpair failed and we were unable to recover it. 00:30:15.473 [2024-07-16 00:06:30.339319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.473 [2024-07-16 00:06:30.339327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.473 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.339658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.339667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.340035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.340042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.340411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.340419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.340767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.340774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.341143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.341151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.341495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.341503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.341850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.341857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.342211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.342218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 
00:30:15.474 [2024-07-16 00:06:30.342426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.342435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.342800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.342808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.343235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.343243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.343554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.343563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.343907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.343914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.344153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.344161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.344499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.344507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.344851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.344858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.345050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.345058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.345377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.345384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 
00:30:15.474 [2024-07-16 00:06:30.345734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.345742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.346090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.346098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.346465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.346473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.346852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.346859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.347205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.347213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.347568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.347576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.347959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.347967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.348336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.348344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.348749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.348762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.349114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.349122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 
00:30:15.474 [2024-07-16 00:06:30.349512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.349520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.349700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.349708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.350047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.350054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.474 [2024-07-16 00:06:30.350403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.474 [2024-07-16 00:06:30.350411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.474 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.350756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.350763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.351121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.351129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.351369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.351377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.351632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.351640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.352009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.352017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.352376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.352384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 
00:30:15.475 [2024-07-16 00:06:30.352733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.352741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.353084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.353091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.353465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.353473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.353839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.353848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.354204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.354213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.354558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.354565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.354945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.354953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.355334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.355342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.355703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.355711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.356139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.356147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 
00:30:15.475 [2024-07-16 00:06:30.356477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.356485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.356853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.356862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.357198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.357206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.357496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.357504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.357872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.357880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.358247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.358255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.358602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.358609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.358957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.358965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.359349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.359358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.359729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.359737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 
00:30:15.475 [2024-07-16 00:06:30.360076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.360084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.360396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.360405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.360772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.360780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.361152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.361160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.361335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.361345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.361683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.475 [2024-07-16 00:06:30.361691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.475 qpair failed and we were unable to recover it. 00:30:15.475 [2024-07-16 00:06:30.362034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.362042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.362424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.362432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.362622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.362633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.362798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.362806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 
00:30:15.476 [2024-07-16 00:06:30.363118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.363126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.363377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.363385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.363731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.363741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.364091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.364099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.364444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.364452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.364819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.364826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.365156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.365165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.365516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.365523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.365866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.365874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.366239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.366247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 
00:30:15.476 [2024-07-16 00:06:30.366581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.366589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.366944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.366952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.367300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.367308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.367652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.367660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.368024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.368031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.368377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.368385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.368731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.368739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.369110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.369118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.369456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.369464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.369808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.369816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 
00:30:15.476 [2024-07-16 00:06:30.370163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.370173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.370513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.370521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.370886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.370894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.371238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.371246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.371638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.371646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.372013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.372021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.372394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.372402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.372750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.372757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.476 [2024-07-16 00:06:30.373101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.476 [2024-07-16 00:06:30.373110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.476 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.373319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.373328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 
00:30:15.477 [2024-07-16 00:06:30.373674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.373682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.374026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.374034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.374382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.374389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.374769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.374776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.375118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.375126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.375462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.375470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.375814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.375822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.376187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.376195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.376566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.376578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.376923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.376931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 
00:30:15.477 [2024-07-16 00:06:30.377317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.377325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.377703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.377711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.378080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.378088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.378435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.378443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.378786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.378794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.379170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.379178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.379518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.379526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.379871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.379879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.380216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.380224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.380437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.380446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 
00:30:15.477 [2024-07-16 00:06:30.380817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.380826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.381133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.381142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.381342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.381351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.381682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.381690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.382057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.382064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.382294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.382302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.477 [2024-07-16 00:06:30.382643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.477 [2024-07-16 00:06:30.382650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.477 qpair failed and we were unable to recover it. 00:30:15.478 [2024-07-16 00:06:30.383027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.478 [2024-07-16 00:06:30.383035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.478 qpair failed and we were unable to recover it. 00:30:15.478 [2024-07-16 00:06:30.383418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.478 [2024-07-16 00:06:30.383426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.478 qpair failed and we were unable to recover it. 00:30:15.478 [2024-07-16 00:06:30.383762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.478 [2024-07-16 00:06:30.383770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.478 qpair failed and we were unable to recover it. 
00:30:15.478 [2024-07-16 00:06:30.384052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.478 [2024-07-16 00:06:30.384060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.478 qpair failed and we were unable to recover it. 00:30:15.478 [2024-07-16 00:06:30.384436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.478 [2024-07-16 00:06:30.384444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.478 qpair failed and we were unable to recover it. 00:30:15.478 [2024-07-16 00:06:30.384634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.478 [2024-07-16 00:06:30.384642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.478 qpair failed and we were unable to recover it. 00:30:15.478 [2024-07-16 00:06:30.384865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.478 [2024-07-16 00:06:30.384874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.478 qpair failed and we were unable to recover it. 00:30:15.478 [2024-07-16 00:06:30.385226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.478 [2024-07-16 00:06:30.385238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.478 qpair failed and we were unable to recover it. 00:30:15.478 [2024-07-16 00:06:30.385613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.478 [2024-07-16 00:06:30.385621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.478 qpair failed and we were unable to recover it. 00:30:15.478 [2024-07-16 00:06:30.385988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.478 [2024-07-16 00:06:30.385996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.478 qpair failed and we were unable to recover it. 00:30:15.478 [2024-07-16 00:06:30.386367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.478 [2024-07-16 00:06:30.386374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.478 qpair failed and we were unable to recover it. 00:30:15.478 [2024-07-16 00:06:30.386719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.478 [2024-07-16 00:06:30.386726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.478 qpair failed and we were unable to recover it. 00:30:15.478 [2024-07-16 00:06:30.387103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.478 [2024-07-16 00:06:30.387111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.478 qpair failed and we were unable to recover it. 
00:30:15.478 [2024-07-16 00:06:30.387481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.478 [2024-07-16 00:06:30.387489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.478 qpair failed and we were unable to recover it. 00:30:15.478 [2024-07-16 00:06:30.387658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.478 [2024-07-16 00:06:30.387667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.478 qpair failed and we were unable to recover it. 00:30:15.478 [2024-07-16 00:06:30.388025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.478 [2024-07-16 00:06:30.388032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.478 qpair failed and we were unable to recover it. 00:30:15.478 [2024-07-16 00:06:30.388218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.478 [2024-07-16 00:06:30.388226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.478 qpair failed and we were unable to recover it. 00:30:15.478 [2024-07-16 00:06:30.388604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.478 [2024-07-16 00:06:30.388612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.478 qpair failed and we were unable to recover it. 00:30:15.478 [2024-07-16 00:06:30.388959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.478 [2024-07-16 00:06:30.388967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.478 qpair failed and we were unable to recover it. 00:30:15.478 [2024-07-16 00:06:30.389339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.478 [2024-07-16 00:06:30.389347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.478 qpair failed and we were unable to recover it. 00:30:15.478 [2024-07-16 00:06:30.389660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.478 [2024-07-16 00:06:30.389668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.478 qpair failed and we were unable to recover it. 00:30:15.478 [2024-07-16 00:06:30.390007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.478 [2024-07-16 00:06:30.390014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.478 qpair failed and we were unable to recover it. 00:30:15.478 [2024-07-16 00:06:30.390371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.479 [2024-07-16 00:06:30.390379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.479 qpair failed and we were unable to recover it. 
00:30:15.479 [2024-07-16 00:06:30.390723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.479 [2024-07-16 00:06:30.390730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.479 qpair failed and we were unable to recover it. 00:30:15.479 [2024-07-16 00:06:30.390925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.479 [2024-07-16 00:06:30.390934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.479 qpair failed and we were unable to recover it. 00:30:15.479 [2024-07-16 00:06:30.391299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.479 [2024-07-16 00:06:30.391313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.479 qpair failed and we were unable to recover it. 00:30:15.479 [2024-07-16 00:06:30.391655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.479 [2024-07-16 00:06:30.391663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.479 qpair failed and we were unable to recover it. 00:30:15.479 [2024-07-16 00:06:30.392008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.479 [2024-07-16 00:06:30.392015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.479 qpair failed and we were unable to recover it. 00:30:15.479 [2024-07-16 00:06:30.392385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.479 [2024-07-16 00:06:30.392392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.479 qpair failed and we were unable to recover it. 00:30:15.479 [2024-07-16 00:06:30.392761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.479 [2024-07-16 00:06:30.392769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.479 qpair failed and we were unable to recover it. 00:30:15.479 [2024-07-16 00:06:30.393178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.479 [2024-07-16 00:06:30.393186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.479 qpair failed and we were unable to recover it. 00:30:15.479 [2024-07-16 00:06:30.393531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.479 [2024-07-16 00:06:30.393538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.479 qpair failed and we were unable to recover it. 00:30:15.479 [2024-07-16 00:06:30.393906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.479 [2024-07-16 00:06:30.393915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.479 qpair failed and we were unable to recover it. 
00:30:15.479 [2024-07-16 00:06:30.394249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.479 [2024-07-16 00:06:30.394257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.479 qpair failed and we were unable to recover it. 00:30:15.479 [2024-07-16 00:06:30.394443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.479 [2024-07-16 00:06:30.394451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.479 qpair failed and we were unable to recover it. 00:30:15.479 [2024-07-16 00:06:30.394774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.479 [2024-07-16 00:06:30.394782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.479 qpair failed and we were unable to recover it. 00:30:15.479 [2024-07-16 00:06:30.395163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.479 [2024-07-16 00:06:30.395170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.479 qpair failed and we were unable to recover it. 00:30:15.479 [2024-07-16 00:06:30.395522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.479 [2024-07-16 00:06:30.395530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.479 qpair failed and we were unable to recover it. 00:30:15.479 [2024-07-16 00:06:30.395740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.480 [2024-07-16 00:06:30.395749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.480 qpair failed and we were unable to recover it. 00:30:15.480 [2024-07-16 00:06:30.396096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.480 [2024-07-16 00:06:30.396104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.480 qpair failed and we were unable to recover it. 00:30:15.480 [2024-07-16 00:06:30.396472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.480 [2024-07-16 00:06:30.396480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.480 qpair failed and we were unable to recover it. 00:30:15.480 [2024-07-16 00:06:30.396848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.480 [2024-07-16 00:06:30.396856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.480 qpair failed and we were unable to recover it. 00:30:15.480 [2024-07-16 00:06:30.397199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.480 [2024-07-16 00:06:30.397207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.480 qpair failed and we were unable to recover it. 
00:30:15.480 [2024-07-16 00:06:30.397555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.480 [2024-07-16 00:06:30.397562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.480 qpair failed and we were unable to recover it. 00:30:15.480 [2024-07-16 00:06:30.397937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.480 [2024-07-16 00:06:30.397944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.480 qpair failed and we were unable to recover it. 00:30:15.480 [2024-07-16 00:06:30.398168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.480 [2024-07-16 00:06:30.398176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.480 qpair failed and we were unable to recover it. 00:30:15.480 [2024-07-16 00:06:30.398525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.480 [2024-07-16 00:06:30.398533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.480 qpair failed and we were unable to recover it. 00:30:15.480 [2024-07-16 00:06:30.398882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.480 [2024-07-16 00:06:30.398890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.480 qpair failed and we were unable to recover it. 00:30:15.480 [2024-07-16 00:06:30.399267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.480 [2024-07-16 00:06:30.399277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.480 qpair failed and we were unable to recover it. 00:30:15.480 [2024-07-16 00:06:30.399658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.480 [2024-07-16 00:06:30.399666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.480 qpair failed and we were unable to recover it. 00:30:15.480 [2024-07-16 00:06:30.400012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.480 [2024-07-16 00:06:30.400019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.480 qpair failed and we were unable to recover it. 00:30:15.480 [2024-07-16 00:06:30.400364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.480 [2024-07-16 00:06:30.400372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.480 qpair failed and we were unable to recover it. 00:30:15.480 [2024-07-16 00:06:30.400740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.480 [2024-07-16 00:06:30.400748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.480 qpair failed and we were unable to recover it. 
00:30:15.480 [2024-07-16 00:06:30.401114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.480 [2024-07-16 00:06:30.401121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.480 qpair failed and we were unable to recover it. 00:30:15.480 [2024-07-16 00:06:30.401309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.480 [2024-07-16 00:06:30.401318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.480 qpair failed and we were unable to recover it. 00:30:15.480 [2024-07-16 00:06:30.401641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.480 [2024-07-16 00:06:30.401649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.480 qpair failed and we were unable to recover it. 00:30:15.480 [2024-07-16 00:06:30.401993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.480 [2024-07-16 00:06:30.402000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.480 qpair failed and we were unable to recover it. 00:30:15.480 [2024-07-16 00:06:30.402378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.480 [2024-07-16 00:06:30.402386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.480 qpair failed and we were unable to recover it. 00:30:15.480 [2024-07-16 00:06:30.402729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.480 [2024-07-16 00:06:30.402737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.480 qpair failed and we were unable to recover it. 00:30:15.480 [2024-07-16 00:06:30.403080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.480 [2024-07-16 00:06:30.403088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.480 qpair failed and we were unable to recover it. 00:30:15.480 [2024-07-16 00:06:30.403454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.480 [2024-07-16 00:06:30.403462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.480 qpair failed and we were unable to recover it. 00:30:15.480 [2024-07-16 00:06:30.403787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.480 [2024-07-16 00:06:30.403795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.481 qpair failed and we were unable to recover it. 00:30:15.481 [2024-07-16 00:06:30.404153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.481 [2024-07-16 00:06:30.404161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.481 qpair failed and we were unable to recover it. 
00:30:15.481 [2024-07-16 00:06:30.404596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.481 [2024-07-16 00:06:30.404604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.481 qpair failed and we were unable to recover it. 00:30:15.481 [2024-07-16 00:06:30.404940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.481 [2024-07-16 00:06:30.404948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.481 qpair failed and we were unable to recover it. 00:30:15.481 [2024-07-16 00:06:30.405318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.481 [2024-07-16 00:06:30.405326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.481 qpair failed and we were unable to recover it. 00:30:15.481 [2024-07-16 00:06:30.405670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.481 [2024-07-16 00:06:30.405678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.481 qpair failed and we were unable to recover it. 00:30:15.481 [2024-07-16 00:06:30.406020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.481 [2024-07-16 00:06:30.406027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.481 qpair failed and we were unable to recover it. 00:30:15.481 [2024-07-16 00:06:30.406396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.481 [2024-07-16 00:06:30.406404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.481 qpair failed and we were unable to recover it. 00:30:15.481 [2024-07-16 00:06:30.406771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.481 [2024-07-16 00:06:30.406779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.481 qpair failed and we were unable to recover it. 00:30:15.481 [2024-07-16 00:06:30.407134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.481 [2024-07-16 00:06:30.407142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.481 qpair failed and we were unable to recover it. 00:30:15.481 [2024-07-16 00:06:30.407492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.481 [2024-07-16 00:06:30.407501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.481 qpair failed and we were unable to recover it. 00:30:15.481 [2024-07-16 00:06:30.407867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.481 [2024-07-16 00:06:30.407875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.481 qpair failed and we were unable to recover it. 
00:30:15.481 [2024-07-16 00:06:30.408254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.481 [2024-07-16 00:06:30.408262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.481 qpair failed and we were unable to recover it. 00:30:15.481 [2024-07-16 00:06:30.408616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.481 [2024-07-16 00:06:30.408623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.481 qpair failed and we were unable to recover it. 00:30:15.481 [2024-07-16 00:06:30.408972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.481 [2024-07-16 00:06:30.408980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.481 qpair failed and we were unable to recover it. 00:30:15.481 [2024-07-16 00:06:30.409351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.481 [2024-07-16 00:06:30.409359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.481 qpair failed and we were unable to recover it. 00:30:15.481 [2024-07-16 00:06:30.409708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.481 [2024-07-16 00:06:30.409715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.481 qpair failed and we were unable to recover it. 00:30:15.481 [2024-07-16 00:06:30.410059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.481 [2024-07-16 00:06:30.410067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.481 qpair failed and we were unable to recover it. 00:30:15.481 [2024-07-16 00:06:30.410490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.481 [2024-07-16 00:06:30.410498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.481 qpair failed and we were unable to recover it. 00:30:15.481 [2024-07-16 00:06:30.410834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.481 [2024-07-16 00:06:30.410841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.481 qpair failed and we were unable to recover it. 00:30:15.481 [2024-07-16 00:06:30.411052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.481 [2024-07-16 00:06:30.411060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.481 qpair failed and we were unable to recover it. 00:30:15.481 [2024-07-16 00:06:30.411293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.481 [2024-07-16 00:06:30.411301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.481 qpair failed and we were unable to recover it. 
00:30:15.481 [2024-07-16 00:06:30.411530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.481 [2024-07-16 00:06:30.411538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.481 qpair failed and we were unable to recover it. 00:30:15.481 [2024-07-16 00:06:30.411908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.481 [2024-07-16 00:06:30.411915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.481 qpair failed and we were unable to recover it. 00:30:15.482 [2024-07-16 00:06:30.412294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.482 [2024-07-16 00:06:30.412302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.482 qpair failed and we were unable to recover it. 00:30:15.482 [2024-07-16 00:06:30.412531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.482 [2024-07-16 00:06:30.412539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.482 qpair failed and we were unable to recover it. 00:30:15.482 [2024-07-16 00:06:30.412877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.482 [2024-07-16 00:06:30.412885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.482 qpair failed and we were unable to recover it. 00:30:15.482 [2024-07-16 00:06:30.413265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.482 [2024-07-16 00:06:30.413275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.482 qpair failed and we were unable to recover it. 00:30:15.482 [2024-07-16 00:06:30.413642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.482 [2024-07-16 00:06:30.413650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.482 qpair failed and we were unable to recover it. 00:30:15.482 [2024-07-16 00:06:30.413995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.482 [2024-07-16 00:06:30.414004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.482 qpair failed and we were unable to recover it. 00:30:15.482 [2024-07-16 00:06:30.414343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.482 [2024-07-16 00:06:30.414351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.482 qpair failed and we were unable to recover it. 00:30:15.482 [2024-07-16 00:06:30.414705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.482 [2024-07-16 00:06:30.414712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.482 qpair failed and we were unable to recover it. 
00:30:15.482 [2024-07-16 00:06:30.415080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.482 [2024-07-16 00:06:30.415088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.482 qpair failed and we were unable to recover it. 00:30:15.482 [2024-07-16 00:06:30.415437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.482 [2024-07-16 00:06:30.415446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.482 qpair failed and we were unable to recover it. 00:30:15.482 [2024-07-16 00:06:30.415789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.482 [2024-07-16 00:06:30.415797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.482 qpair failed and we were unable to recover it. 00:30:15.482 [2024-07-16 00:06:30.415986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.482 [2024-07-16 00:06:30.415994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.482 qpair failed and we were unable to recover it. 00:30:15.482 [2024-07-16 00:06:30.416358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.482 [2024-07-16 00:06:30.416365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.482 qpair failed and we were unable to recover it. 00:30:15.482 [2024-07-16 00:06:30.416711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.482 [2024-07-16 00:06:30.416719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.482 qpair failed and we were unable to recover it. 00:30:15.482 [2024-07-16 00:06:30.417065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.482 [2024-07-16 00:06:30.417072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.482 qpair failed and we were unable to recover it. 00:30:15.482 [2024-07-16 00:06:30.417407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.482 [2024-07-16 00:06:30.417415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.482 qpair failed and we were unable to recover it. 00:30:15.482 [2024-07-16 00:06:30.417779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.482 [2024-07-16 00:06:30.417786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.482 qpair failed and we were unable to recover it. 00:30:15.482 [2024-07-16 00:06:30.418129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.482 [2024-07-16 00:06:30.418136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.482 qpair failed and we were unable to recover it. 
00:30:15.482 [2024-07-16 00:06:30.418497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.482 [2024-07-16 00:06:30.418505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.482 qpair failed and we were unable to recover it. 00:30:15.482 [2024-07-16 00:06:30.418869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.482 [2024-07-16 00:06:30.418878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.482 qpair failed and we were unable to recover it. 00:30:15.482 [2024-07-16 00:06:30.419243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.482 [2024-07-16 00:06:30.419251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.482 qpair failed and we were unable to recover it. 00:30:15.482 [2024-07-16 00:06:30.419605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.482 [2024-07-16 00:06:30.419612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.482 qpair failed and we were unable to recover it. 00:30:15.482 [2024-07-16 00:06:30.419797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.482 [2024-07-16 00:06:30.419805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.483 qpair failed and we were unable to recover it. 00:30:15.483 [2024-07-16 00:06:30.420129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.483 [2024-07-16 00:06:30.420136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.483 qpair failed and we were unable to recover it. 00:30:15.483 [2024-07-16 00:06:30.420496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.483 [2024-07-16 00:06:30.420504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.483 qpair failed and we were unable to recover it. 00:30:15.483 [2024-07-16 00:06:30.420757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.483 [2024-07-16 00:06:30.420765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.483 qpair failed and we were unable to recover it. 00:30:15.483 [2024-07-16 00:06:30.421134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.483 [2024-07-16 00:06:30.421142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.483 qpair failed and we were unable to recover it. 00:30:15.483 [2024-07-16 00:06:30.421482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.483 [2024-07-16 00:06:30.421489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.483 qpair failed and we were unable to recover it. 
00:30:15.483 [2024-07-16 00:06:30.421856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.483 [2024-07-16 00:06:30.421864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.483 qpair failed and we were unable to recover it. 00:30:15.483 [2024-07-16 00:06:30.422195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.483 [2024-07-16 00:06:30.422203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.483 qpair failed and we were unable to recover it. 00:30:15.483 [2024-07-16 00:06:30.422466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.483 [2024-07-16 00:06:30.422474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.483 qpair failed and we were unable to recover it. 00:30:15.483 [2024-07-16 00:06:30.422839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.483 [2024-07-16 00:06:30.422847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.483 qpair failed and we were unable to recover it. 00:30:15.483 [2024-07-16 00:06:30.423033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.483 [2024-07-16 00:06:30.423041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.483 qpair failed and we were unable to recover it. 00:30:15.483 [2024-07-16 00:06:30.423374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.483 [2024-07-16 00:06:30.423381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.483 qpair failed and we were unable to recover it. 00:30:15.483 [2024-07-16 00:06:30.423728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.483 [2024-07-16 00:06:30.423736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.483 qpair failed and we were unable to recover it. 00:30:15.483 [2024-07-16 00:06:30.424104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.483 [2024-07-16 00:06:30.424113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.483 qpair failed and we were unable to recover it. 00:30:15.483 [2024-07-16 00:06:30.424486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.483 [2024-07-16 00:06:30.424494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.483 qpair failed and we were unable to recover it. 00:30:15.483 [2024-07-16 00:06:30.424838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.483 [2024-07-16 00:06:30.424845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.483 qpair failed and we were unable to recover it. 
00:30:15.483 [2024-07-16 00:06:30.425190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.483 [2024-07-16 00:06:30.425198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.483 qpair failed and we were unable to recover it. 00:30:15.483 [2024-07-16 00:06:30.425563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.483 [2024-07-16 00:06:30.425570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.483 qpair failed and we were unable to recover it. 00:30:15.483 [2024-07-16 00:06:30.425944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.483 [2024-07-16 00:06:30.425951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.483 qpair failed and we were unable to recover it. 00:30:15.483 [2024-07-16 00:06:30.426331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.483 [2024-07-16 00:06:30.426339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.483 qpair failed and we were unable to recover it. 00:30:15.483 [2024-07-16 00:06:30.426690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.483 [2024-07-16 00:06:30.426698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.483 qpair failed and we were unable to recover it. 00:30:15.483 [2024-07-16 00:06:30.427075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.483 [2024-07-16 00:06:30.427085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.483 qpair failed and we were unable to recover it. 00:30:15.483 [2024-07-16 00:06:30.427271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.483 [2024-07-16 00:06:30.427279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.483 qpair failed and we were unable to recover it. 00:30:15.483 [2024-07-16 00:06:30.427638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.427646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 00:30:15.484 [2024-07-16 00:06:30.427991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.427999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 00:30:15.484 [2024-07-16 00:06:30.428368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.428376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 
00:30:15.484 [2024-07-16 00:06:30.428742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.428750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 00:30:15.484 [2024-07-16 00:06:30.429096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.429103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 00:30:15.484 [2024-07-16 00:06:30.429316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.429325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 00:30:15.484 [2024-07-16 00:06:30.429693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.429701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 00:30:15.484 [2024-07-16 00:06:30.430043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.430051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 00:30:15.484 [2024-07-16 00:06:30.430396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.430404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 00:30:15.484 [2024-07-16 00:06:30.430748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.430756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 00:30:15.484 [2024-07-16 00:06:30.430966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.430974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 00:30:15.484 [2024-07-16 00:06:30.431236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.431244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 00:30:15.484 [2024-07-16 00:06:30.431575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.431582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 
00:30:15.484 [2024-07-16 00:06:30.431929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.431937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 00:30:15.484 [2024-07-16 00:06:30.432307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.432315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 00:30:15.484 [2024-07-16 00:06:30.432653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.432660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 00:30:15.484 [2024-07-16 00:06:30.433005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.433013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 00:30:15.484 [2024-07-16 00:06:30.433269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.433276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 00:30:15.484 [2024-07-16 00:06:30.433612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.433620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 00:30:15.484 [2024-07-16 00:06:30.433998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.434006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 00:30:15.484 [2024-07-16 00:06:30.434353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.434361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 00:30:15.484 [2024-07-16 00:06:30.434707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.434714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 00:30:15.484 [2024-07-16 00:06:30.435073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.435080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 
00:30:15.484 [2024-07-16 00:06:30.435460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.435469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 00:30:15.484 [2024-07-16 00:06:30.435815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.435823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.484 qpair failed and we were unable to recover it. 00:30:15.484 [2024-07-16 00:06:30.436163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.484 [2024-07-16 00:06:30.436171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.485 qpair failed and we were unable to recover it. 00:30:15.485 [2024-07-16 00:06:30.436495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.485 [2024-07-16 00:06:30.436503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.485 qpair failed and we were unable to recover it. 00:30:15.485 [2024-07-16 00:06:30.436869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.485 [2024-07-16 00:06:30.436876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.485 qpair failed and we were unable to recover it. 00:30:15.485 [2024-07-16 00:06:30.437305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.485 [2024-07-16 00:06:30.437313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.485 qpair failed and we were unable to recover it. 00:30:15.485 [2024-07-16 00:06:30.437650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.485 [2024-07-16 00:06:30.437657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.485 qpair failed and we were unable to recover it. 00:30:15.485 [2024-07-16 00:06:30.438027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.485 [2024-07-16 00:06:30.438035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.485 qpair failed and we were unable to recover it. 00:30:15.485 [2024-07-16 00:06:30.438367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.485 [2024-07-16 00:06:30.438375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.485 qpair failed and we were unable to recover it. 00:30:15.485 [2024-07-16 00:06:30.438742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.485 [2024-07-16 00:06:30.438750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.485 qpair failed and we were unable to recover it. 
00:30:15.485 [2024-07-16 00:06:30.439096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.485 [2024-07-16 00:06:30.439103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.485 qpair failed and we were unable to recover it. 00:30:15.485 [2024-07-16 00:06:30.439465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.485 [2024-07-16 00:06:30.439474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.485 qpair failed and we were unable to recover it. 00:30:15.485 [2024-07-16 00:06:30.439820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.485 [2024-07-16 00:06:30.439827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.485 qpair failed and we were unable to recover it. 00:30:15.485 [2024-07-16 00:06:30.440171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.485 [2024-07-16 00:06:30.440179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.485 qpair failed and we were unable to recover it. 00:30:15.485 [2024-07-16 00:06:30.440528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.485 [2024-07-16 00:06:30.440536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.485 qpair failed and we were unable to recover it. 00:30:15.485 [2024-07-16 00:06:30.440904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.485 [2024-07-16 00:06:30.440913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.485 qpair failed and we were unable to recover it. 00:30:15.485 [2024-07-16 00:06:30.441256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.485 [2024-07-16 00:06:30.441264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.485 qpair failed and we were unable to recover it. 00:30:15.485 [2024-07-16 00:06:30.441610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.485 [2024-07-16 00:06:30.441617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.485 qpair failed and we were unable to recover it. 00:30:15.485 [2024-07-16 00:06:30.441962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.485 [2024-07-16 00:06:30.441970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.485 qpair failed and we were unable to recover it. 00:30:15.485 [2024-07-16 00:06:30.442353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.485 [2024-07-16 00:06:30.442361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.485 qpair failed and we were unable to recover it. 
00:30:15.485 [2024-07-16 00:06:30.442727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.485 [2024-07-16 00:06:30.442734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.485 qpair failed and we were unable to recover it. 00:30:15.485 [2024-07-16 00:06:30.442944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.485 [2024-07-16 00:06:30.442952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.485 qpair failed and we were unable to recover it. 00:30:15.485 [2024-07-16 00:06:30.443144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.485 [2024-07-16 00:06:30.443153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.485 qpair failed and we were unable to recover it. 00:30:15.485 [2024-07-16 00:06:30.443484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.485 [2024-07-16 00:06:30.443491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.485 qpair failed and we were unable to recover it. 00:30:15.485 [2024-07-16 00:06:30.443855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.485 [2024-07-16 00:06:30.443863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.485 qpair failed and we were unable to recover it. 00:30:15.485 [2024-07-16 00:06:30.444208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.485 [2024-07-16 00:06:30.444216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.486 qpair failed and we were unable to recover it. 00:30:15.486 [2024-07-16 00:06:30.444564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.486 [2024-07-16 00:06:30.444572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.486 qpair failed and we were unable to recover it. 00:30:15.486 [2024-07-16 00:06:30.444946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.486 [2024-07-16 00:06:30.444954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.486 qpair failed and we were unable to recover it. 00:30:15.486 [2024-07-16 00:06:30.445332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.486 [2024-07-16 00:06:30.445340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.486 qpair failed and we were unable to recover it. 00:30:15.486 [2024-07-16 00:06:30.445530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.486 [2024-07-16 00:06:30.445539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.486 qpair failed and we were unable to recover it. 
00:30:15.486 [2024-07-16 00:06:30.445847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.486 [2024-07-16 00:06:30.445855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.486 qpair failed and we were unable to recover it. 00:30:15.486 [2024-07-16 00:06:30.446059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.486 [2024-07-16 00:06:30.446067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.486 qpair failed and we were unable to recover it. 00:30:15.486 [2024-07-16 00:06:30.446414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.486 [2024-07-16 00:06:30.446422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.486 qpair failed and we were unable to recover it. 00:30:15.486 [2024-07-16 00:06:30.446775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.486 [2024-07-16 00:06:30.446782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.486 qpair failed and we were unable to recover it. 00:30:15.486 [2024-07-16 00:06:30.447128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.486 [2024-07-16 00:06:30.447135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.486 qpair failed and we were unable to recover it. 00:30:15.486 [2024-07-16 00:06:30.447509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.486 [2024-07-16 00:06:30.447517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.486 qpair failed and we were unable to recover it. 00:30:15.486 [2024-07-16 00:06:30.447882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.486 [2024-07-16 00:06:30.447889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.486 qpair failed and we were unable to recover it. 00:30:15.486 [2024-07-16 00:06:30.448236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.486 [2024-07-16 00:06:30.448244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.486 qpair failed and we were unable to recover it. 00:30:15.486 [2024-07-16 00:06:30.448582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.486 [2024-07-16 00:06:30.448589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.486 qpair failed and we were unable to recover it. 00:30:15.486 [2024-07-16 00:06:30.448958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.486 [2024-07-16 00:06:30.448966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.486 qpair failed and we were unable to recover it. 
00:30:15.486 [2024-07-16 00:06:30.449303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.486 [2024-07-16 00:06:30.449311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.486 qpair failed and we were unable to recover it. 00:30:15.486 [2024-07-16 00:06:30.449665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.486 [2024-07-16 00:06:30.449672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.486 qpair failed and we were unable to recover it. 00:30:15.486 [2024-07-16 00:06:30.449862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.486 [2024-07-16 00:06:30.449870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.486 qpair failed and we were unable to recover it. 00:30:15.486 [2024-07-16 00:06:30.450236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.486 [2024-07-16 00:06:30.450244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.486 qpair failed and we were unable to recover it. 00:30:15.486 [2024-07-16 00:06:30.450609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.486 [2024-07-16 00:06:30.450616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.486 qpair failed and we were unable to recover it. 00:30:15.486 [2024-07-16 00:06:30.451003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.486 [2024-07-16 00:06:30.451011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.486 qpair failed and we were unable to recover it. 00:30:15.486 [2024-07-16 00:06:30.451357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.486 [2024-07-16 00:06:30.451365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.486 qpair failed and we were unable to recover it. 00:30:15.486 [2024-07-16 00:06:30.451673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.486 [2024-07-16 00:06:30.451680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.486 qpair failed and we were unable to recover it. 00:30:15.486 [2024-07-16 00:06:30.451879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.486 [2024-07-16 00:06:30.451887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.486 qpair failed and we were unable to recover it. 00:30:15.486 [2024-07-16 00:06:30.452201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.486 [2024-07-16 00:06:30.452208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.487 qpair failed and we were unable to recover it. 
00:30:15.487 [2024-07-16 00:06:30.452528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.487 [2024-07-16 00:06:30.452536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.487 qpair failed and we were unable to recover it. 00:30:15.487 [2024-07-16 00:06:30.452901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.487 [2024-07-16 00:06:30.452909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.487 qpair failed and we were unable to recover it. 00:30:15.487 [2024-07-16 00:06:30.453287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.487 [2024-07-16 00:06:30.453295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.487 qpair failed and we were unable to recover it. 00:30:15.487 [2024-07-16 00:06:30.453629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.487 [2024-07-16 00:06:30.453636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.487 qpair failed and we were unable to recover it. 00:30:15.487 [2024-07-16 00:06:30.453984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.487 [2024-07-16 00:06:30.453993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.487 qpair failed and we were unable to recover it. 00:30:15.487 [2024-07-16 00:06:30.454366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.487 [2024-07-16 00:06:30.454376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.487 qpair failed and we were unable to recover it. 00:30:15.487 [2024-07-16 00:06:30.454757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.487 [2024-07-16 00:06:30.454765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.487 qpair failed and we were unable to recover it. 00:30:15.487 [2024-07-16 00:06:30.455160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.487 [2024-07-16 00:06:30.455167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.487 qpair failed and we were unable to recover it. 00:30:15.487 [2024-07-16 00:06:30.455501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.487 [2024-07-16 00:06:30.455509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.487 qpair failed and we were unable to recover it. 00:30:15.487 [2024-07-16 00:06:30.455888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.487 [2024-07-16 00:06:30.455896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.487 qpair failed and we were unable to recover it. 
00:30:15.487 [2024-07-16 00:06:30.456268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.487 [2024-07-16 00:06:30.456275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.487 qpair failed and we were unable to recover it. 00:30:15.487 [2024-07-16 00:06:30.456712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.487 [2024-07-16 00:06:30.456719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.487 qpair failed and we were unable to recover it. 00:30:15.487 [2024-07-16 00:06:30.457061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.487 [2024-07-16 00:06:30.457068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.487 qpair failed and we were unable to recover it. 00:30:15.487 [2024-07-16 00:06:30.457438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.487 [2024-07-16 00:06:30.457445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.487 qpair failed and we were unable to recover it. 00:30:15.487 [2024-07-16 00:06:30.457803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.487 [2024-07-16 00:06:30.457810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.487 qpair failed and we were unable to recover it. 00:30:15.487 [2024-07-16 00:06:30.458158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.487 [2024-07-16 00:06:30.458165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.487 qpair failed and we were unable to recover it. 00:30:15.487 [2024-07-16 00:06:30.458404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.487 [2024-07-16 00:06:30.458411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.487 qpair failed and we were unable to recover it. 00:30:15.487 [2024-07-16 00:06:30.458789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.487 [2024-07-16 00:06:30.458797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.487 qpair failed and we were unable to recover it. 00:30:15.487 [2024-07-16 00:06:30.459001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.487 [2024-07-16 00:06:30.459009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.487 qpair failed and we were unable to recover it. 00:30:15.487 [2024-07-16 00:06:30.459325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.487 [2024-07-16 00:06:30.459333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.487 qpair failed and we were unable to recover it. 
00:30:15.487 [2024-07-16 00:06:30.459679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.487 [2024-07-16 00:06:30.459686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.487 qpair failed and we were unable to recover it. 00:30:15.487 [2024-07-16 00:06:30.460055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.487 [2024-07-16 00:06:30.460062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.487 qpair failed and we were unable to recover it. 00:30:15.487 [2024-07-16 00:06:30.460408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.487 [2024-07-16 00:06:30.460416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.487 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.460837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.460844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.461179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.461187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.461415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.461423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.461796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.461803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.462054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.462063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.462401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.462408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.462729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.462736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 
00:30:15.488 [2024-07-16 00:06:30.463094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.463102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.463439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.463448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.463799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.463807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.464170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.464178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.464521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.464528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.464946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.464954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.465288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.465295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.465645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.465653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.466020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.466027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.466398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.466405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 
00:30:15.488 [2024-07-16 00:06:30.466713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.466720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.467066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.467074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.467457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.467465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.467805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.467812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.468071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.468078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.468454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.468464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.468863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.468871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.469066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.469073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.469396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.469405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.469735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.469743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 
00:30:15.488 [2024-07-16 00:06:30.470108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.470116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.470483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.470490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.470840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.470848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.471213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.471220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.471594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.488 [2024-07-16 00:06:30.471601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.488 qpair failed and we were unable to recover it. 00:30:15.488 [2024-07-16 00:06:30.471947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.471956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 [2024-07-16 00:06:30.472325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.472332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 [2024-07-16 00:06:30.472688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.472695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 [2024-07-16 00:06:30.473066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.473074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 [2024-07-16 00:06:30.473430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.473438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 
00:30:15.489 [2024-07-16 00:06:30.473787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.473794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 [2024-07-16 00:06:30.474160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.474168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 [2024-07-16 00:06:30.474508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.474515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 [2024-07-16 00:06:30.474865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.474873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 [2024-07-16 00:06:30.475223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.475241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 [2024-07-16 00:06:30.475457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.475466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 [2024-07-16 00:06:30.475656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.475664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 [2024-07-16 00:06:30.475989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.475997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 [2024-07-16 00:06:30.476345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.476353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 [2024-07-16 00:06:30.476733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.476741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 
00:30:15.489 [2024-07-16 00:06:30.477119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.477127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 [2024-07-16 00:06:30.477492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.477500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 [2024-07-16 00:06:30.477848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.477856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 [2024-07-16 00:06:30.478236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.478244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 [2024-07-16 00:06:30.478582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.478590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 [2024-07-16 00:06:30.478938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.478946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 [2024-07-16 00:06:30.479291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.479299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 [2024-07-16 00:06:30.479627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.479634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 [2024-07-16 00:06:30.480003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.480011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 [2024-07-16 00:06:30.480403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.480411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 
00:30:15.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 664224 Killed "${NVMF_APP[@]}" "$@" 00:30:15.489 [2024-07-16 00:06:30.480748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.480756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 [2024-07-16 00:06:30.481133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.481141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 00:06:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:30:15.489 [2024-07-16 00:06:30.481521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.481529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 00:06:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:15.489 00:06:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:15.489 [2024-07-16 00:06:30.481885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.481893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 00:06:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:15.489 [2024-07-16 00:06:30.482242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.489 [2024-07-16 00:06:30.482250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.489 qpair failed and we were unable to recover it. 00:30:15.489 00:06:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:15.489 [2024-07-16 00:06:30.482603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.482611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 00:30:15.490 [2024-07-16 00:06:30.482940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.482947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 
00:30:15.490 [2024-07-16 00:06:30.483293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.483301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 00:30:15.490 [2024-07-16 00:06:30.483664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.483672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 00:30:15.490 [2024-07-16 00:06:30.484042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.484050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 00:30:15.490 [2024-07-16 00:06:30.484520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.484530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 00:30:15.490 [2024-07-16 00:06:30.484876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.484884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 00:30:15.490 [2024-07-16 00:06:30.485240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.485248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 00:30:15.490 [2024-07-16 00:06:30.485630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.485638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 00:30:15.490 [2024-07-16 00:06:30.485975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.485982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 00:30:15.490 [2024-07-16 00:06:30.486336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.486344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 00:30:15.490 [2024-07-16 00:06:30.486692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.486703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 
00:30:15.490 [2024-07-16 00:06:30.487042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.487050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 00:30:15.490 [2024-07-16 00:06:30.487422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.487430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 00:30:15.490 [2024-07-16 00:06:30.487688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.487696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 00:30:15.490 [2024-07-16 00:06:30.488116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.488124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 00:30:15.490 [2024-07-16 00:06:30.488492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.488500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 00:30:15.490 [2024-07-16 00:06:30.488748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.488757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 00:30:15.490 [2024-07-16 00:06:30.488970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.488978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 00:30:15.490 [2024-07-16 00:06:30.489329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.489338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8c8000b90 with addr=10.0.0.2, port=4420 00:30:15.490 00:06:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=665255 00:30:15.490 qpair failed and we were unable to recover it. 00:30:15.490 [2024-07-16 00:06:30.489436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17df800 is same with the state(5) to be set 00:30:15.490 00:06:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 665255 00:30:15.490 [2024-07-16 00:06:30.489833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.489852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 
00:30:15.490 00:06:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@823 -- # '[' -z 665255 ']' 00:30:15.490 00:06:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:15.490 [2024-07-16 00:06:30.490102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.490114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 00:30:15.490 [2024-07-16 00:06:30.490348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.490365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 00:30:15.490 00:06:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:15.490 00:06:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@828 -- # local max_retries=100 00:30:15.490 [2024-07-16 00:06:30.490729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.490740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 00:30:15.490 00:06:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:15.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:15.490 00:06:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # xtrace_disable 00:30:15.490 [2024-07-16 00:06:30.491114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.491126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 00:30:15.490 00:06:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:15.490 [2024-07-16 00:06:30.491366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.491379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 00:30:15.490 [2024-07-16 00:06:30.491613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.491624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 
00:30:15.490 [2024-07-16 00:06:30.491862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.490 [2024-07-16 00:06:30.491873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.490 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.492108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.492120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.492315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.492327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.492668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.492680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.492904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.492915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.493249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.493261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.493512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.493529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.493910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.493921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.494372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.494384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.494730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.494741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 
00:30:15.491 [2024-07-16 00:06:30.495094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.495106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.495380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.495392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.495746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.495758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.496111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.496122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.496372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.496384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.496743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.496755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.497113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.497124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.497383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.497394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.497745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.497756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.498120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.498132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 
00:30:15.491 [2024-07-16 00:06:30.498578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.498590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.498932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.498943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.499294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.499306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.499465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.499476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.499851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.499862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.500082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.500096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.500379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.500392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.500748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.500759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.501101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.501112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.501474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.501485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 
00:30:15.491 [2024-07-16 00:06:30.501840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.501852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.502087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.502098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.502506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.491 [2024-07-16 00:06:30.502518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.491 qpair failed and we were unable to recover it. 00:30:15.491 [2024-07-16 00:06:30.502868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.502882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-07-16 00:06:30.503260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.503272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-07-16 00:06:30.503650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.503662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-07-16 00:06:30.503913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.503924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-07-16 00:06:30.504310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.504321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-07-16 00:06:30.504572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.504583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-07-16 00:06:30.504895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.504907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 
00:30:15.492 [2024-07-16 00:06:30.505285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.505297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-07-16 00:06:30.505532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.505543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-07-16 00:06:30.505780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.505791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-07-16 00:06:30.506135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.506147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-07-16 00:06:30.506493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.506505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-07-16 00:06:30.506856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.506868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-07-16 00:06:30.507213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.507224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-07-16 00:06:30.507616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.507628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-07-16 00:06:30.508018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.508028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-07-16 00:06:30.508400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.508411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 
00:30:15.492 [2024-07-16 00:06:30.508773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.508784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-07-16 00:06:30.509150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.509161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-07-16 00:06:30.509498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.509508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-07-16 00:06:30.509884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.509895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-07-16 00:06:30.510157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.510168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-07-16 00:06:30.510522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.510534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-07-16 00:06:30.510911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.510922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-07-16 00:06:30.511277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.511287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-07-16 00:06:30.511665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.511676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-07-16 00:06:30.512045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.492 [2024-07-16 00:06:30.512055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.492 qpair failed and we were unable to recover it. 
00:30:15.492 [2024-07-16 00:06:30.512266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.512279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.512656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.512666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.513023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.513033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.513390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.513400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.513656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.513666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.514048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.514059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.514453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.514464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.514669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.514680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.514938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.514948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.515302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.515314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 
00:30:15.493 [2024-07-16 00:06:30.515677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.515687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.515941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.515951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.516306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.516317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.516686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.516697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.517077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.517088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.517283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.517293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.517588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.517598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.517877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.517888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.518262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.518273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.518539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.518549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 
00:30:15.493 [2024-07-16 00:06:30.518949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.518959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.519325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.519336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.519783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.519793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.520019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.520031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.520405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.520416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.520728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.520739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.521105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.521115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.521386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.521396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.521707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.521717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.522103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.522113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 
00:30:15.493 [2024-07-16 00:06:30.522489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.522499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.522851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.522862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.523120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.493 [2024-07-16 00:06:30.523131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-07-16 00:06:30.523295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.523305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.523710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.523720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.523986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.523997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.524394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.524405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.524761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.524771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.525124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.525135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.525380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.525391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 
00:30:15.494 [2024-07-16 00:06:30.525710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.525721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.526107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.526118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.526492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.526503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.526746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.526756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.527136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.527148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.527504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.527514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.527874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.527884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.528114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.528125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.528518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.528529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.528877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.528887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 
00:30:15.494 [2024-07-16 00:06:30.529242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.529253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.529610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.529621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.529974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.529986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.530233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.530246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.530607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.530617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.530832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.530851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.531227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.531240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.531508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.531517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.531865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.531874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.532073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.532083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 
00:30:15.494 [2024-07-16 00:06:30.532316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.494 [2024-07-16 00:06:30.532326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-07-16 00:06:30.532590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.495 [2024-07-16 00:06:30.532602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.495 qpair failed and we were unable to recover it. 00:30:15.495 [2024-07-16 00:06:30.532787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.495 [2024-07-16 00:06:30.532797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.495 qpair failed and we were unable to recover it. 00:30:15.495 [2024-07-16 00:06:30.533057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.495 [2024-07-16 00:06:30.533067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.495 qpair failed and we were unable to recover it. 00:30:15.495 [2024-07-16 00:06:30.533433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.495 [2024-07-16 00:06:30.533443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.495 qpair failed and we were unable to recover it. 00:30:15.495 [2024-07-16 00:06:30.533823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.495 [2024-07-16 00:06:30.533834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.495 qpair failed and we were unable to recover it. 00:30:15.495 [2024-07-16 00:06:30.534236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.495 [2024-07-16 00:06:30.534247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.495 qpair failed and we were unable to recover it. 00:30:15.495 [2024-07-16 00:06:30.534589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.495 [2024-07-16 00:06:30.534599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.495 qpair failed and we were unable to recover it. 00:30:15.495 [2024-07-16 00:06:30.534788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.495 [2024-07-16 00:06:30.534801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.495 qpair failed and we were unable to recover it. 00:30:15.495 [2024-07-16 00:06:30.535037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.495 [2024-07-16 00:06:30.535048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.495 qpair failed and we were unable to recover it. 
00:30:15.495 [2024-07-16 00:06:30.535369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.495 [2024-07-16 00:06:30.535380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.495 qpair failed and we were unable to recover it. 00:30:15.495 [2024-07-16 00:06:30.535741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.495 [2024-07-16 00:06:30.535751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.495 qpair failed and we were unable to recover it. 00:30:15.495 [2024-07-16 00:06:30.535943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.495 [2024-07-16 00:06:30.535952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.495 qpair failed and we were unable to recover it. 00:30:15.495 [2024-07-16 00:06:30.536296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.495 [2024-07-16 00:06:30.536307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.495 qpair failed and we were unable to recover it. 00:30:15.495 [2024-07-16 00:06:30.536373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.495 [2024-07-16 00:06:30.536383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.495 qpair failed and we were unable to recover it. 00:30:15.495 [2024-07-16 00:06:30.536573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.495 [2024-07-16 00:06:30.536584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.495 qpair failed and we were unable to recover it. 00:30:15.495 [2024-07-16 00:06:30.536998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.495 [2024-07-16 00:06:30.537007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.495 qpair failed and we were unable to recover it. 00:30:15.495 [2024-07-16 00:06:30.537228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.495 [2024-07-16 00:06:30.537242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.495 qpair failed and we were unable to recover it. 00:30:15.495 [2024-07-16 00:06:30.537637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.495 [2024-07-16 00:06:30.537647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.495 qpair failed and we were unable to recover it. 00:30:15.495 [2024-07-16 00:06:30.537988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.495 [2024-07-16 00:06:30.537998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.495 qpair failed and we were unable to recover it. 
00:30:15.495 [2024-07-16 00:06:30.538365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.495 [2024-07-16 00:06:30.538375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.495 qpair failed and we were unable to recover it. 00:30:15.495 [2024-07-16 00:06:30.538720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.495 [2024-07-16 00:06:30.538729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.495 qpair failed and we were unable to recover it. 00:30:15.495 [2024-07-16 00:06:30.539087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.495 [2024-07-16 00:06:30.539098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.495 qpair failed and we were unable to recover it. 00:30:15.495 [2024-07-16 00:06:30.539486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.495 [2024-07-16 00:06:30.539496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.495 qpair failed and we were unable to recover it. 00:30:15.495 [2024-07-16 00:06:30.539860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.496 [2024-07-16 00:06:30.539869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.496 qpair failed and we were unable to recover it. 00:30:15.496 [2024-07-16 00:06:30.540250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.496 [2024-07-16 00:06:30.540260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.496 qpair failed and we were unable to recover it. 00:30:15.496 [2024-07-16 00:06:30.540609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.496 [2024-07-16 00:06:30.540619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.496 qpair failed and we were unable to recover it. 00:30:15.496 [2024-07-16 00:06:30.540815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.496 [2024-07-16 00:06:30.540825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.496 qpair failed and we were unable to recover it. 00:30:15.496 [2024-07-16 00:06:30.541196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.496 [2024-07-16 00:06:30.541205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.496 qpair failed and we were unable to recover it. 00:30:15.496 [2024-07-16 00:06:30.541621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.496 [2024-07-16 00:06:30.541632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.496 qpair failed and we were unable to recover it. 
00:30:15.496 [2024-07-16 00:06:30.541972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.496 [2024-07-16 00:06:30.541981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.496 qpair failed and we were unable to recover it. 00:30:15.496 [2024-07-16 00:06:30.542353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.496 [2024-07-16 00:06:30.542363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.496 qpair failed and we were unable to recover it. 00:30:15.496 [2024-07-16 00:06:30.542767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.496 [2024-07-16 00:06:30.542776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.496 qpair failed and we were unable to recover it. 00:30:15.496 [2024-07-16 00:06:30.543118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.496 [2024-07-16 00:06:30.543127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.496 qpair failed and we were unable to recover it. 00:30:15.496 [2024-07-16 00:06:30.543392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.496 [2024-07-16 00:06:30.543402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.496 qpair failed and we were unable to recover it. 00:30:15.496 [2024-07-16 00:06:30.543806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.496 [2024-07-16 00:06:30.543818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.496 qpair failed and we were unable to recover it. 00:30:15.496 [2024-07-16 00:06:30.544188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.496 [2024-07-16 00:06:30.544198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.496 qpair failed and we were unable to recover it. 00:30:15.496 [2024-07-16 00:06:30.544386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.496 [2024-07-16 00:06:30.544396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.496 qpair failed and we were unable to recover it. 00:30:15.496 [2024-07-16 00:06:30.544757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.496 [2024-07-16 00:06:30.544766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.496 qpair failed and we were unable to recover it. 00:30:15.496 [2024-07-16 00:06:30.545131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.496 [2024-07-16 00:06:30.545140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.496 qpair failed and we were unable to recover it. 
00:30:15.496 [2024-07-16 00:06:30.545354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.496 [2024-07-16 00:06:30.545364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.496 qpair failed and we were unable to recover it. 00:30:15.496 [2024-07-16 00:06:30.545729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.496 [2024-07-16 00:06:30.545739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.496 qpair failed and we were unable to recover it. 00:30:15.496 [2024-07-16 00:06:30.545969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.496 [2024-07-16 00:06:30.545978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.496 qpair failed and we were unable to recover it. 00:30:15.496 [2024-07-16 00:06:30.546298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.496 [2024-07-16 00:06:30.546308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.496 qpair failed and we were unable to recover it. 00:30:15.496 [2024-07-16 00:06:30.546348] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:30:15.496 [2024-07-16 00:06:30.546394] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:15.496 [2024-07-16 00:06:30.546648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.496 [2024-07-16 00:06:30.546657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.496 qpair failed and we were unable to recover it. 00:30:15.496 [2024-07-16 00:06:30.546741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.496 [2024-07-16 00:06:30.546750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.496 qpair failed and we were unable to recover it. 00:30:15.496 [2024-07-16 00:06:30.547074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.497 [2024-07-16 00:06:30.547083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.497 qpair failed and we were unable to recover it. 00:30:15.497 [2024-07-16 00:06:30.547464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.497 [2024-07-16 00:06:30.547474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.497 qpair failed and we were unable to recover it. 00:30:15.497 [2024-07-16 00:06:30.547848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.497 [2024-07-16 00:06:30.547857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.497 qpair failed and we were unable to recover it. 
00:30:15.497 [2024-07-16 00:06:30.548310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.497 [2024-07-16 00:06:30.548320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.497 qpair failed and we were unable to recover it. 00:30:15.497 [2024-07-16 00:06:30.548703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.497 [2024-07-16 00:06:30.548713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.497 qpair failed and we were unable to recover it. 00:30:15.497 [2024-07-16 00:06:30.549078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.497 [2024-07-16 00:06:30.549087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.497 qpair failed and we were unable to recover it. 00:30:15.497 [2024-07-16 00:06:30.549565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.497 [2024-07-16 00:06:30.549575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.497 qpair failed and we were unable to recover it. 00:30:15.497 [2024-07-16 00:06:30.549965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.497 [2024-07-16 00:06:30.549975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.497 qpair failed and we were unable to recover it. 00:30:15.497 [2024-07-16 00:06:30.550330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.497 [2024-07-16 00:06:30.550339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.497 qpair failed and we were unable to recover it. 00:30:15.497 [2024-07-16 00:06:30.550720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.497 [2024-07-16 00:06:30.550730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.497 qpair failed and we were unable to recover it. 00:30:15.497 [2024-07-16 00:06:30.551110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.497 [2024-07-16 00:06:30.551120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.497 qpair failed and we were unable to recover it. 00:30:15.497 [2024-07-16 00:06:30.551339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.497 [2024-07-16 00:06:30.551348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.497 qpair failed and we were unable to recover it. 00:30:15.497 [2024-07-16 00:06:30.551711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.497 [2024-07-16 00:06:30.551720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.497 qpair failed and we were unable to recover it. 
00:30:15.497 [2024-07-16 00:06:30.552101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.497 [2024-07-16 00:06:30.552110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.497 qpair failed and we were unable to recover it. 00:30:15.497 [2024-07-16 00:06:30.552577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.497 [2024-07-16 00:06:30.552588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.497 qpair failed and we were unable to recover it. 00:30:15.497 [2024-07-16 00:06:30.552978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.497 [2024-07-16 00:06:30.552989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.497 qpair failed and we were unable to recover it. 00:30:15.497 [2024-07-16 00:06:30.553208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.497 [2024-07-16 00:06:30.553217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.497 qpair failed and we were unable to recover it. 00:30:15.497 [2024-07-16 00:06:30.553474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.497 [2024-07-16 00:06:30.553484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.497 qpair failed and we were unable to recover it. 00:30:15.497 [2024-07-16 00:06:30.553728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.497 [2024-07-16 00:06:30.553738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.497 qpair failed and we were unable to recover it. 00:30:15.497 [2024-07-16 00:06:30.554128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.497 [2024-07-16 00:06:30.554137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.497 qpair failed and we were unable to recover it. 00:30:15.497 [2024-07-16 00:06:30.554498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.497 [2024-07-16 00:06:30.554508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.497 qpair failed and we were unable to recover it. 00:30:15.497 [2024-07-16 00:06:30.554861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.497 [2024-07-16 00:06:30.554870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.497 qpair failed and we were unable to recover it. 00:30:15.497 [2024-07-16 00:06:30.555254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.497 [2024-07-16 00:06:30.555265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.497 qpair failed and we were unable to recover it. 
00:30:15.497 [2024-07-16 00:06:30.555659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.497 [2024-07-16 00:06:30.555669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.497 qpair failed and we were unable to recover it. 00:30:15.497 [2024-07-16 00:06:30.556030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.498 [2024-07-16 00:06:30.556039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.498 qpair failed and we were unable to recover it. 00:30:15.498 [2024-07-16 00:06:30.556270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.498 [2024-07-16 00:06:30.556280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.498 qpair failed and we were unable to recover it. 00:30:15.498 [2024-07-16 00:06:30.556614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.498 [2024-07-16 00:06:30.556623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.498 qpair failed and we were unable to recover it. 00:30:15.498 [2024-07-16 00:06:30.557031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.498 [2024-07-16 00:06:30.557040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.498 qpair failed and we were unable to recover it. 00:30:15.498 [2024-07-16 00:06:30.557387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.498 [2024-07-16 00:06:30.557396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.498 qpair failed and we were unable to recover it. 00:30:15.498 [2024-07-16 00:06:30.557788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.498 [2024-07-16 00:06:30.557797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.498 qpair failed and we were unable to recover it. 00:30:15.498 [2024-07-16 00:06:30.558020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.498 [2024-07-16 00:06:30.558029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.498 qpair failed and we were unable to recover it. 00:30:15.498 [2024-07-16 00:06:30.558352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.498 [2024-07-16 00:06:30.558362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.498 qpair failed and we were unable to recover it. 00:30:15.498 [2024-07-16 00:06:30.558795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.498 [2024-07-16 00:06:30.558804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.498 qpair failed and we were unable to recover it. 
00:30:15.498 [2024-07-16 00:06:30.559168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.498 [2024-07-16 00:06:30.559177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.498 qpair failed and we were unable to recover it. 00:30:15.498 [2024-07-16 00:06:30.559536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.498 [2024-07-16 00:06:30.559546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.498 qpair failed and we were unable to recover it. 00:30:15.498 [2024-07-16 00:06:30.559931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.498 [2024-07-16 00:06:30.559941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.498 qpair failed and we were unable to recover it. 00:30:15.498 [2024-07-16 00:06:30.560302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.498 [2024-07-16 00:06:30.560312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.498 qpair failed and we were unable to recover it. 00:30:15.498 [2024-07-16 00:06:30.560502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.498 [2024-07-16 00:06:30.560511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.498 qpair failed and we were unable to recover it. 00:30:15.498 [2024-07-16 00:06:30.560696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.498 [2024-07-16 00:06:30.560706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.498 qpair failed and we were unable to recover it. 00:30:15.498 [2024-07-16 00:06:30.560974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.498 [2024-07-16 00:06:30.560983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.498 qpair failed and we were unable to recover it. 00:30:15.498 [2024-07-16 00:06:30.561318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.498 [2024-07-16 00:06:30.561328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.498 qpair failed and we were unable to recover it. 00:30:15.498 [2024-07-16 00:06:30.561734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.498 [2024-07-16 00:06:30.561743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.498 qpair failed and we were unable to recover it. 00:30:15.498 [2024-07-16 00:06:30.562103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.498 [2024-07-16 00:06:30.562114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.498 qpair failed and we were unable to recover it. 
00:30:15.498 [2024-07-16 00:06:30.562495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.498 [2024-07-16 00:06:30.562505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.498 qpair failed and we were unable to recover it. 00:30:15.498 [2024-07-16 00:06:30.562875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.498 [2024-07-16 00:06:30.562884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.498 qpair failed and we were unable to recover it. 00:30:15.498 [2024-07-16 00:06:30.563107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.498 [2024-07-16 00:06:30.563116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.498 qpair failed and we were unable to recover it. 00:30:15.498 [2024-07-16 00:06:30.563494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.498 [2024-07-16 00:06:30.563504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.498 qpair failed and we were unable to recover it. 00:30:15.498 [2024-07-16 00:06:30.563866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.498 [2024-07-16 00:06:30.563875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.499 qpair failed and we were unable to recover it. 00:30:15.499 [2024-07-16 00:06:30.564256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.499 [2024-07-16 00:06:30.564266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.499 qpair failed and we were unable to recover it. 00:30:15.499 [2024-07-16 00:06:30.564508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.499 [2024-07-16 00:06:30.564518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.499 qpair failed and we were unable to recover it. 00:30:15.499 [2024-07-16 00:06:30.564846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.499 [2024-07-16 00:06:30.564855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.499 qpair failed and we were unable to recover it. 00:30:15.499 [2024-07-16 00:06:30.565216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.499 [2024-07-16 00:06:30.565225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.499 qpair failed and we were unable to recover it. 00:30:15.499 [2024-07-16 00:06:30.565577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.499 [2024-07-16 00:06:30.565587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.499 qpair failed and we were unable to recover it. 
00:30:15.499 [2024-07-16 00:06:30.565963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.499 [2024-07-16 00:06:30.565973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.499 qpair failed and we were unable to recover it. 00:30:15.499 [2024-07-16 00:06:30.566336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.499 [2024-07-16 00:06:30.566346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.499 qpair failed and we were unable to recover it. 00:30:15.499 [2024-07-16 00:06:30.566728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.499 [2024-07-16 00:06:30.566738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.499 qpair failed and we were unable to recover it. 00:30:15.499 [2024-07-16 00:06:30.566987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.499 [2024-07-16 00:06:30.566996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.499 qpair failed and we were unable to recover it. 00:30:15.499 [2024-07-16 00:06:30.567400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.499 [2024-07-16 00:06:30.567410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.499 qpair failed and we were unable to recover it. 00:30:15.499 [2024-07-16 00:06:30.567652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.499 [2024-07-16 00:06:30.567661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.499 qpair failed and we were unable to recover it. 00:30:15.499 [2024-07-16 00:06:30.568056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.499 [2024-07-16 00:06:30.568065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.499 qpair failed and we were unable to recover it. 00:30:15.499 [2024-07-16 00:06:30.568461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.499 [2024-07-16 00:06:30.568470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.499 qpair failed and we were unable to recover it. 00:30:15.499 [2024-07-16 00:06:30.568676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.499 [2024-07-16 00:06:30.568685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.499 qpair failed and we were unable to recover it. 00:30:15.499 [2024-07-16 00:06:30.569032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.499 [2024-07-16 00:06:30.569041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.499 qpair failed and we were unable to recover it. 
00:30:15.499 [2024-07-16 00:06:30.569403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.499 [2024-07-16 00:06:30.569412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.499 qpair failed and we were unable to recover it. 00:30:15.499 [2024-07-16 00:06:30.569748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.500 [2024-07-16 00:06:30.569757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.500 qpair failed and we were unable to recover it. 00:30:15.500 [2024-07-16 00:06:30.570135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.500 [2024-07-16 00:06:30.570144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.500 qpair failed and we were unable to recover it. 00:30:15.500 [2024-07-16 00:06:30.570482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.500 [2024-07-16 00:06:30.570491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.500 qpair failed and we were unable to recover it. 00:30:15.500 [2024-07-16 00:06:30.570874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.500 [2024-07-16 00:06:30.570883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.500 qpair failed and we were unable to recover it. 00:30:15.500 [2024-07-16 00:06:30.571307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.500 [2024-07-16 00:06:30.571317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.500 qpair failed and we were unable to recover it. 00:30:15.500 [2024-07-16 00:06:30.571696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.500 [2024-07-16 00:06:30.571707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.500 qpair failed and we were unable to recover it. 00:30:15.500 [2024-07-16 00:06:30.572158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.500 [2024-07-16 00:06:30.572167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.500 qpair failed and we were unable to recover it. 00:30:15.500 [2024-07-16 00:06:30.572393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.500 [2024-07-16 00:06:30.572402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.500 qpair failed and we were unable to recover it. 00:30:15.500 [2024-07-16 00:06:30.572843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.500 [2024-07-16 00:06:30.572853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.500 qpair failed and we were unable to recover it. 
00:30:15.500 [2024-07-16 00:06:30.573071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.500 [2024-07-16 00:06:30.573080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.500 qpair failed and we were unable to recover it. 00:30:15.500 [2024-07-16 00:06:30.573442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.500 [2024-07-16 00:06:30.573452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.500 qpair failed and we were unable to recover it. 00:30:15.500 [2024-07-16 00:06:30.573835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.500 [2024-07-16 00:06:30.573844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.500 qpair failed and we were unable to recover it. 00:30:15.500 [2024-07-16 00:06:30.574210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.500 [2024-07-16 00:06:30.574219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.500 qpair failed and we were unable to recover it. 00:30:15.500 [2024-07-16 00:06:30.574619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.500 [2024-07-16 00:06:30.574628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.500 qpair failed and we were unable to recover it. 00:30:15.500 [2024-07-16 00:06:30.574968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.500 [2024-07-16 00:06:30.574977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.500 qpair failed and we were unable to recover it. 00:30:15.500 [2024-07-16 00:06:30.575346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.500 [2024-07-16 00:06:30.575356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.500 qpair failed and we were unable to recover it. 00:30:15.500 [2024-07-16 00:06:30.575598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.500 [2024-07-16 00:06:30.575608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.500 qpair failed and we were unable to recover it. 00:30:15.500 [2024-07-16 00:06:30.575976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.500 [2024-07-16 00:06:30.575985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.500 qpair failed and we were unable to recover it. 00:30:15.500 [2024-07-16 00:06:30.576223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.500 [2024-07-16 00:06:30.576252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.500 qpair failed and we were unable to recover it. 
00:30:15.500 [2024-07-16 00:06:30.576511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.500 [2024-07-16 00:06:30.576521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.500 qpair failed and we were unable to recover it. 00:30:15.500 [2024-07-16 00:06:30.576893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.500 [2024-07-16 00:06:30.576903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.500 qpair failed and we were unable to recover it. 00:30:15.500 [2024-07-16 00:06:30.577272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.500 [2024-07-16 00:06:30.577281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.500 qpair failed and we were unable to recover it. 00:30:15.500 [2024-07-16 00:06:30.577525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.500 [2024-07-16 00:06:30.577535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.500 qpair failed and we were unable to recover it. 00:30:15.501 [2024-07-16 00:06:30.577980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.501 [2024-07-16 00:06:30.577989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.501 qpair failed and we were unable to recover it. 00:30:15.501 [2024-07-16 00:06:30.578441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.501 [2024-07-16 00:06:30.578450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.501 qpair failed and we were unable to recover it. 00:30:15.501 [2024-07-16 00:06:30.578849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.501 [2024-07-16 00:06:30.578858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.501 qpair failed and we were unable to recover it. 00:30:15.501 [2024-07-16 00:06:30.579113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.501 [2024-07-16 00:06:30.579122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.501 qpair failed and we were unable to recover it. 00:30:15.501 [2024-07-16 00:06:30.579496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.501 [2024-07-16 00:06:30.579505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.501 qpair failed and we were unable to recover it. 00:30:15.501 [2024-07-16 00:06:30.579872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.501 [2024-07-16 00:06:30.579881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.501 qpair failed and we were unable to recover it. 
00:30:15.501 [2024-07-16 00:06:30.580081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.501 [2024-07-16 00:06:30.580090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.501 qpair failed and we were unable to recover it. 00:30:15.501 [2024-07-16 00:06:30.580276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.501 [2024-07-16 00:06:30.580286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.501 qpair failed and we were unable to recover it. 00:30:15.501 [2024-07-16 00:06:30.580659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.501 [2024-07-16 00:06:30.580668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.501 qpair failed and we were unable to recover it. 00:30:15.501 [2024-07-16 00:06:30.581006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.501 [2024-07-16 00:06:30.581015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.501 qpair failed and we were unable to recover it. 00:30:15.501 [2024-07-16 00:06:30.581228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.501 [2024-07-16 00:06:30.581241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.501 qpair failed and we were unable to recover it. 00:30:15.501 [2024-07-16 00:06:30.581566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.501 [2024-07-16 00:06:30.581577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.501 qpair failed and we were unable to recover it. 00:30:15.501 [2024-07-16 00:06:30.581804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.501 [2024-07-16 00:06:30.581813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.501 qpair failed and we were unable to recover it. 00:30:15.501 [2024-07-16 00:06:30.582164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.501 [2024-07-16 00:06:30.582174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.501 qpair failed and we were unable to recover it. 00:30:15.501 [2024-07-16 00:06:30.582363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.501 [2024-07-16 00:06:30.582373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.501 qpair failed and we were unable to recover it. 00:30:15.501 [2024-07-16 00:06:30.582786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.501 [2024-07-16 00:06:30.582795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.501 qpair failed and we were unable to recover it. 
00:30:15.501 [2024-07-16 00:06:30.583061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.501 [2024-07-16 00:06:30.583070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.501 qpair failed and we were unable to recover it. 00:30:15.501 [2024-07-16 00:06:30.583459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.501 [2024-07-16 00:06:30.583469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.501 qpair failed and we were unable to recover it. 00:30:15.501 [2024-07-16 00:06:30.583670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.501 [2024-07-16 00:06:30.583679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.501 qpair failed and we were unable to recover it. 00:30:15.501 [2024-07-16 00:06:30.583996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.501 [2024-07-16 00:06:30.584005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.501 qpair failed and we were unable to recover it. 00:30:15.501 [2024-07-16 00:06:30.584374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.501 [2024-07-16 00:06:30.584384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.501 qpair failed and we were unable to recover it. 00:30:15.501 [2024-07-16 00:06:30.584609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.501 [2024-07-16 00:06:30.584619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.501 qpair failed and we were unable to recover it. 00:30:15.501 [2024-07-16 00:06:30.585013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.501 [2024-07-16 00:06:30.585022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.501 qpair failed and we were unable to recover it. 00:30:15.501 [2024-07-16 00:06:30.585419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.585432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.502 qpair failed and we were unable to recover it. 00:30:15.502 [2024-07-16 00:06:30.585651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.585661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.502 qpair failed and we were unable to recover it. 00:30:15.502 [2024-07-16 00:06:30.586002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.586011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.502 qpair failed and we were unable to recover it. 
00:30:15.502 [2024-07-16 00:06:30.586424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.586434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.502 qpair failed and we were unable to recover it. 00:30:15.502 [2024-07-16 00:06:30.586824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.586833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.502 qpair failed and we were unable to recover it. 00:30:15.502 [2024-07-16 00:06:30.587202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.587211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.502 qpair failed and we were unable to recover it. 00:30:15.502 [2024-07-16 00:06:30.587608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.587618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.502 qpair failed and we were unable to recover it. 00:30:15.502 [2024-07-16 00:06:30.587868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.587877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.502 qpair failed and we were unable to recover it. 00:30:15.502 [2024-07-16 00:06:30.588076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.588085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.502 qpair failed and we were unable to recover it. 00:30:15.502 [2024-07-16 00:06:30.588440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.588450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.502 qpair failed and we were unable to recover it. 00:30:15.502 [2024-07-16 00:06:30.588839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.588848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.502 qpair failed and we were unable to recover it. 00:30:15.502 [2024-07-16 00:06:30.589232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.589242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.502 qpair failed and we were unable to recover it. 00:30:15.502 [2024-07-16 00:06:30.589582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.589592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.502 qpair failed and we were unable to recover it. 
00:30:15.502 [2024-07-16 00:06:30.589921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.589930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.502 qpair failed and we were unable to recover it. 00:30:15.502 [2024-07-16 00:06:30.590294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.590303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.502 qpair failed and we were unable to recover it. 00:30:15.502 [2024-07-16 00:06:30.590706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.590715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.502 qpair failed and we were unable to recover it. 00:30:15.502 [2024-07-16 00:06:30.591051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.591060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.502 qpair failed and we were unable to recover it. 00:30:15.502 [2024-07-16 00:06:30.591314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.591324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.502 qpair failed and we were unable to recover it. 00:30:15.502 [2024-07-16 00:06:30.591728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.591738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.502 qpair failed and we were unable to recover it. 00:30:15.502 [2024-07-16 00:06:30.592084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.592094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.502 qpair failed and we were unable to recover it. 00:30:15.502 [2024-07-16 00:06:30.592449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.592459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.502 qpair failed and we were unable to recover it. 00:30:15.502 [2024-07-16 00:06:30.592867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.592877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.502 qpair failed and we were unable to recover it. 00:30:15.502 [2024-07-16 00:06:30.593252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.593261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.502 qpair failed and we were unable to recover it. 
00:30:15.502 [2024-07-16 00:06:30.593489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.593498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.502 qpair failed and we were unable to recover it. 00:30:15.502 [2024-07-16 00:06:30.593746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.502 [2024-07-16 00:06:30.593755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.594097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.594106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.594474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.594483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.594845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.594856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.595241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.595250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.595626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.595635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.595998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.596008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.596372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.596381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.596604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.596614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 
00:30:15.503 [2024-07-16 00:06:30.597004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.597013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.597221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.597237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.597578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.597588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.597933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.597942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.598326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.598335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.598702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.598711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.598919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.598928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.599290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.599300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.599688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.599698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.600039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.600049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 
00:30:15.503 [2024-07-16 00:06:30.600211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.600220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.600468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.600478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.600615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.600624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.601025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.601035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.601397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.601407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.601769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.601778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.602146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.602155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.602557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.503 [2024-07-16 00:06:30.602566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.503 qpair failed and we were unable to recover it. 00:30:15.503 [2024-07-16 00:06:30.602926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.602935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.603305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.603315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 
00:30:15.504 [2024-07-16 00:06:30.603690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.603699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.604074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.604085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.604447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.604457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.604839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.604848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.605212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.605221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.605583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.605592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.605958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.605967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.606178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.606187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.606414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.606424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.606772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.606782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 
00:30:15.504 [2024-07-16 00:06:30.606982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.606992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.607337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.607346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.607721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.607730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.607874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.607885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.608239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.608248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.608663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.608672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.609036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.609045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.609438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.609448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.609804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.609813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.610177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.610187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 
00:30:15.504 [2024-07-16 00:06:30.610502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.610512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.610748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.610758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.611122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.611132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.611448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.611458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.611655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.611666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.611915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.611924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.612270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.612279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.612500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.612510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.504 [2024-07-16 00:06:30.612886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.504 [2024-07-16 00:06:30.612896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.504 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.613276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.613286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 
00:30:15.505 [2024-07-16 00:06:30.613398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.613406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.613759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.613769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.613988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.613997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.614245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.614254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.614653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.614662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.615028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.615037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.615400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.615409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.615743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.615752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.616139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.616148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.616547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.616556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 
00:30:15.505 [2024-07-16 00:06:30.616919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.616928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.617247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.617257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.617639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.617650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.618046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.618055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.618425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.618434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.618794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.618803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.619167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.619176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.619380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.619390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.619618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.619627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.619990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.619999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 
00:30:15.505 [2024-07-16 00:06:30.620209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.620219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.620487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.620497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.620698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.620707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.621073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.621082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.621447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.621457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.621525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.621533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.621853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.621862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.622205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.622214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.622572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.622582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.622944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.622953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 
00:30:15.505 [2024-07-16 00:06:30.623331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.623341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.505 [2024-07-16 00:06:30.623672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.505 [2024-07-16 00:06:30.623681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.505 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.624044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.624053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.624420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.624430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.624834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.624844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.625207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.625216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.625448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.625458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.625831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.625840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.626202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.626212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.626463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.626474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 
00:30:15.506 [2024-07-16 00:06:30.626863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.626872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.627240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.627250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.627633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.627642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.627857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.627867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.628111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.628120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.628515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.628524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.628864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.628873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.629241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.629251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.629588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.629597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.629855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.629865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 
00:30:15.506 [2024-07-16 00:06:30.630247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.630256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.630601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.630610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.631005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.631015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.631272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.631283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.631681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.631691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.632014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.632023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.632389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.632398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.632619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.632628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.633021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.506 [2024-07-16 00:06:30.633031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.506 qpair failed and we were unable to recover it. 00:30:15.506 [2024-07-16 00:06:30.633399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.507 [2024-07-16 00:06:30.633408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.507 qpair failed and we were unable to recover it. 
00:30:15.507 [2024-07-16 00:06:30.633609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.507 [2024-07-16 00:06:30.633618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.507 qpair failed and we were unable to recover it. 00:30:15.507 [2024-07-16 00:06:30.633965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.507 [2024-07-16 00:06:30.633974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.507 qpair failed and we were unable to recover it. 00:30:15.507 [2024-07-16 00:06:30.634233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.507 [2024-07-16 00:06:30.634242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.507 qpair failed and we were unable to recover it. 00:30:15.507 [2024-07-16 00:06:30.634533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.507 [2024-07-16 00:06:30.634542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.507 qpair failed and we were unable to recover it. 00:30:15.507 [2024-07-16 00:06:30.634756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.507 [2024-07-16 00:06:30.634765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.507 qpair failed and we were unable to recover it. 00:30:15.507 [2024-07-16 00:06:30.635029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.507 [2024-07-16 00:06:30.635038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.507 qpair failed and we were unable to recover it. 00:30:15.507 [2024-07-16 00:06:30.635433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.507 [2024-07-16 00:06:30.635445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.507 qpair failed and we were unable to recover it. 00:30:15.507 [2024-07-16 00:06:30.635826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.507 [2024-07-16 00:06:30.635835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.507 qpair failed and we were unable to recover it. 00:30:15.507 [2024-07-16 00:06:30.636198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.507 [2024-07-16 00:06:30.636207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.507 qpair failed and we were unable to recover it. 00:30:15.507 [2024-07-16 00:06:30.636576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.507 [2024-07-16 00:06:30.636585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.507 qpair failed and we were unable to recover it. 
00:30:15.507 [2024-07-16 00:06:30.636948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.507 [2024-07-16 00:06:30.636957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.507 qpair failed and we were unable to recover it. 00:30:15.507 [2024-07-16 00:06:30.637322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.507 [2024-07-16 00:06:30.637331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.507 qpair failed and we were unable to recover it. 00:30:15.507 [2024-07-16 00:06:30.637549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.507 [2024-07-16 00:06:30.637559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.507 qpair failed and we were unable to recover it. 00:30:15.782 [2024-07-16 00:06:30.637825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.782 [2024-07-16 00:06:30.637836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.782 qpair failed and we were unable to recover it. 00:30:15.782 [2024-07-16 00:06:30.638225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.782 [2024-07-16 00:06:30.638239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.782 qpair failed and we were unable to recover it. 00:30:15.782 [2024-07-16 00:06:30.638597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.782 [2024-07-16 00:06:30.638606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.782 qpair failed and we were unable to recover it. 00:30:15.782 [2024-07-16 00:06:30.638976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.782 [2024-07-16 00:06:30.638985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.782 qpair failed and we were unable to recover it. 00:30:15.782 [2024-07-16 00:06:30.639164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.782 [2024-07-16 00:06:30.639173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.782 qpair failed and we were unable to recover it. 00:30:15.782 [2024-07-16 00:06:30.639554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.782 [2024-07-16 00:06:30.639564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.782 qpair failed and we were unable to recover it. 00:30:15.782 [2024-07-16 00:06:30.639677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:15.782 [2024-07-16 00:06:30.639910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.782 [2024-07-16 00:06:30.639919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.782 qpair failed and we were unable to recover it. 
00:30:15.782 [2024-07-16 00:06:30.640195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.640205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.640489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.640499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.640833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.640844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.641206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.641215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.641595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.641605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.641970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.641979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.642113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.642123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.642472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.642482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.642839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.642849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.643262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.643273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 
00:30:15.783 [2024-07-16 00:06:30.643533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.643542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.643879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.643888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.644297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.644308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.644645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.644657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.644990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.644999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.645381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.645390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.645743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.645752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.645940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.645950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.646300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.646310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.646630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.646640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 
00:30:15.783 [2024-07-16 00:06:30.646993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.647003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.647347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.647356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.647572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.647582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.647959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.647969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.648274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.648285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.648662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.648672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.649010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.649019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.649358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.649368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.649700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.649710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.650066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.650076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 
00:30:15.783 [2024-07-16 00:06:30.650438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.650448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.650780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.650790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.651122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.783 [2024-07-16 00:06:30.651132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.783 qpair failed and we were unable to recover it. 00:30:15.783 [2024-07-16 00:06:30.651539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.651549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.651906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.651916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.652228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.652241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.652591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.652600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.652927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.652936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.653269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.653279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.653623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.653632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 
00:30:15.784 [2024-07-16 00:06:30.654000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.654011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.654435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.654445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.654791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.654801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.655169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.655178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.655532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.655541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.655774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.655783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.656049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.656058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.656397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.656406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.656595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.656604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.656936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.656945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 
00:30:15.784 [2024-07-16 00:06:30.657299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.657309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.657671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.657681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.657888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.657899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.658286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.658296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.658647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.658656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.659012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.659022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.659405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.659414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.659757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.659766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.660122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.660131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.660528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.660539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 
00:30:15.784 [2024-07-16 00:06:30.660929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.660938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.661314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.661323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.661703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.661713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.662074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.662083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.662416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.662426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.662615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.784 [2024-07-16 00:06:30.662624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.784 qpair failed and we were unable to recover it. 00:30:15.784 [2024-07-16 00:06:30.662858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.662867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.663190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.663201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.663460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.663469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.663813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.663822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 
00:30:15.785 [2024-07-16 00:06:30.664187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.664197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.664555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.664565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.664757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.664766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.665063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.665072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.665430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.665440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.665745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.665754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.666113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.666122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.666316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.666327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.666657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.666667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.667046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.667055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 
00:30:15.785 [2024-07-16 00:06:30.667440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.667450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.667824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.667833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.668196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.668205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.668595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.668605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.668961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.668970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.669346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.669355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.669718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.669728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.670109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.670119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.670497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.670508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.670877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.670887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 
00:30:15.785 [2024-07-16 00:06:30.671245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.671255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.671585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.671595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.671951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.671961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.672173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.672183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.672579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.672589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.672926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.672936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.673257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.673268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.673618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.673628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.674010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.785 [2024-07-16 00:06:30.674021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.785 qpair failed and we were unable to recover it. 00:30:15.785 [2024-07-16 00:06:30.674386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.674396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 
00:30:15.786 [2024-07-16 00:06:30.674749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.674759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.675077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.675087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.675448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.675458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.675820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.675830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.676206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.676215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.676603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.676614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.676837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.676847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.677212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.677222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.677601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.677612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.677985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.677994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 
00:30:15.786 [2024-07-16 00:06:30.678254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.678265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.678607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.678616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.678966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.678975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.679502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.679541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.679797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.679809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.680174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.680184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.680520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.680529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.680890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.680899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.681258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.681268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.681640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.681649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 
00:30:15.786 [2024-07-16 00:06:30.682025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.682034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.682429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.682441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.682781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.682790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.683161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.683171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.683522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.683532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.683900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.683909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.684254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.684263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.684645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.684654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.685023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.685033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.685245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.685257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 
00:30:15.786 [2024-07-16 00:06:30.685587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.685597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.685928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.786 [2024-07-16 00:06:30.685937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.786 qpair failed and we were unable to recover it. 00:30:15.786 [2024-07-16 00:06:30.686295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.686305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.686698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.686707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.687036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.687046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.687416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.687432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.687564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.687574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.687985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.687995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.688298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.688307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.688683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.688692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 
00:30:15.787 [2024-07-16 00:06:30.689023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.689032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.689388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.689397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.689735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.689744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.690105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.690114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.690454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.690464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.690851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.690860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.691196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.691205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.691582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.691591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.691919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.691928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.692299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.692309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 
00:30:15.787 [2024-07-16 00:06:30.692674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.692683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.693004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.693013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.693403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.693413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.693733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.693742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.694095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.694104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.694297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.694308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.694654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.694663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.695011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.695020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.695326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.695335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.695692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.695701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 
00:30:15.787 [2024-07-16 00:06:30.695891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.695901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.696184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.696193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.696547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.696559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.787 qpair failed and we were unable to recover it. 00:30:15.787 [2024-07-16 00:06:30.696941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.787 [2024-07-16 00:06:30.696950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.788 qpair failed and we were unable to recover it. 00:30:15.788 [2024-07-16 00:06:30.697139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.788 [2024-07-16 00:06:30.697150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.788 qpair failed and we were unable to recover it. 00:30:15.788 [2024-07-16 00:06:30.697434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.788 [2024-07-16 00:06:30.697443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.788 qpair failed and we were unable to recover it. 00:30:15.788 [2024-07-16 00:06:30.697772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.788 [2024-07-16 00:06:30.697781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.788 qpair failed and we were unable to recover it. 00:30:15.788 [2024-07-16 00:06:30.697980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.788 [2024-07-16 00:06:30.697990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.788 qpair failed and we were unable to recover it. 00:30:15.788 [2024-07-16 00:06:30.698172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.788 [2024-07-16 00:06:30.698181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.788 qpair failed and we were unable to recover it. 00:30:15.788 [2024-07-16 00:06:30.698549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.788 [2024-07-16 00:06:30.698559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.788 qpair failed and we were unable to recover it. 
00:30:15.788 [2024-07-16 00:06:30.698767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.788 [2024-07-16 00:06:30.698776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.788 qpair failed and we were unable to recover it. 00:30:15.788 [2024-07-16 00:06:30.699126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.788 [2024-07-16 00:06:30.699135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.788 qpair failed and we were unable to recover it. 00:30:15.788 [2024-07-16 00:06:30.699396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.788 [2024-07-16 00:06:30.699406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.788 qpair failed and we were unable to recover it. 00:30:15.788 [2024-07-16 00:06:30.699745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.788 [2024-07-16 00:06:30.699754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.788 qpair failed and we were unable to recover it. 00:30:15.788 [2024-07-16 00:06:30.700123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.788 [2024-07-16 00:06:30.700132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.788 qpair failed and we were unable to recover it. 00:30:15.788 [2024-07-16 00:06:30.700487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.788 [2024-07-16 00:06:30.700496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.788 qpair failed and we were unable to recover it. 00:30:15.788 [2024-07-16 00:06:30.700864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.788 [2024-07-16 00:06:30.700874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.788 qpair failed and we were unable to recover it. 00:30:15.788 [2024-07-16 00:06:30.701228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.788 [2024-07-16 00:06:30.701241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.788 qpair failed and we were unable to recover it. 00:30:15.788 [2024-07-16 00:06:30.701679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.788 [2024-07-16 00:06:30.701688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.788 qpair failed and we were unable to recover it. 00:30:15.788 [2024-07-16 00:06:30.702071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.788 [2024-07-16 00:06:30.702081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.788 qpair failed and we were unable to recover it. 
00:30:15.788 [2024-07-16 00:06:30.702253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.788 [2024-07-16 00:06:30.702263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:15.788 qpair failed and we were unable to recover it.
00:30:15.788 [2024-07-16 00:06:30.702632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.788 [2024-07-16 00:06:30.702642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:15.788 qpair failed and we were unable to recover it.
00:30:15.788 [2024-07-16 00:06:30.702977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.788 [2024-07-16 00:06:30.702986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:15.788 qpair failed and we were unable to recover it.
00:30:15.788 [2024-07-16 00:06:30.703341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.788 [2024-07-16 00:06:30.703364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:15.788 qpair failed and we were unable to recover it.
00:30:15.788 [2024-07-16 00:06:30.703748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.788 [2024-07-16 00:06:30.703757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:15.788 qpair failed and we were unable to recover it.
00:30:15.788 [2024-07-16 00:06:30.704104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.788 [2024-07-16 00:06:30.704113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:15.788 qpair failed and we were unable to recover it.
00:30:15.788 [2024-07-16 00:06:30.704483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.788 [2024-07-16 00:06:30.704493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:15.788 qpair failed and we were unable to recover it.
00:30:15.788 [2024-07-16 00:06:30.704872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.788 [2024-07-16 00:06:30.704881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:15.788 qpair failed and we were unable to recover it.
00:30:15.788 [2024-07-16 00:06:30.705046] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:15.788 [2024-07-16 00:06:30.705075] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:15.788 [2024-07-16 00:06:30.705082] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:15.788 [2024-07-16 00:06:30.705089] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:15.788 [2024-07-16 00:06:30.705098] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:15.788 [2024-07-16 00:06:30.705250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.788 [2024-07-16 00:06:30.705260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:15.788 qpair failed and we were unable to recover it.
00:30:15.788 [2024-07-16 00:06:30.705275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:30:15.788 [2024-07-16 00:06:30.705583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.789 [2024-07-16 00:06:30.705593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:15.789 qpair failed and we were unable to recover it.
00:30:15.789 [2024-07-16 00:06:30.705621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:30:15.789 [2024-07-16 00:06:30.705756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:30:15.789 [2024-07-16 00:06:30.705756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:30:15.789 [2024-07-16 00:06:30.705932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.789 [2024-07-16 00:06:30.705942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:15.789 qpair failed and we were unable to recover it.
00:30:15.789 [2024-07-16 00:06:30.706147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.789 [2024-07-16 00:06:30.706155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:15.789 qpair failed and we were unable to recover it.
00:30:15.789 [2024-07-16 00:06:30.706551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.789 [2024-07-16 00:06:30.706561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:15.789 qpair failed and we were unable to recover it.
00:30:15.789 [2024-07-16 00:06:30.706892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.789 [2024-07-16 00:06:30.706902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:15.789 qpair failed and we were unable to recover it.
00:30:15.789 [2024-07-16 00:06:30.707266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.789 [2024-07-16 00:06:30.707277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:15.789 qpair failed and we were unable to recover it.
00:30:15.789 [2024-07-16 00:06:30.707630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.789 [2024-07-16 00:06:30.707639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:15.789 qpair failed and we were unable to recover it.
00:30:15.789 [2024-07-16 00:06:30.708025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.789 [2024-07-16 00:06:30.708034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:15.789 qpair failed and we were unable to recover it.
00:30:15.789 [2024-07-16 00:06:30.708372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.789 [2024-07-16 00:06:30.708381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.789 qpair failed and we were unable to recover it. 00:30:15.789 [2024-07-16 00:06:30.708673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.789 [2024-07-16 00:06:30.708683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.789 qpair failed and we were unable to recover it. 00:30:15.789 [2024-07-16 00:06:30.709036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.789 [2024-07-16 00:06:30.709045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.789 qpair failed and we were unable to recover it. 00:30:15.789 [2024-07-16 00:06:30.709492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.789 [2024-07-16 00:06:30.709502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.789 qpair failed and we were unable to recover it. 00:30:15.789 [2024-07-16 00:06:30.709793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.789 [2024-07-16 00:06:30.709803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.789 qpair failed and we were unable to recover it. 00:30:15.789 [2024-07-16 00:06:30.710171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.789 [2024-07-16 00:06:30.710180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.789 qpair failed and we were unable to recover it. 00:30:15.789 [2024-07-16 00:06:30.710527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.789 [2024-07-16 00:06:30.710537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.789 qpair failed and we were unable to recover it. 00:30:15.789 [2024-07-16 00:06:30.710895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.789 [2024-07-16 00:06:30.710905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.789 qpair failed and we were unable to recover it. 00:30:15.789 [2024-07-16 00:06:30.711261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.789 [2024-07-16 00:06:30.711272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.789 qpair failed and we were unable to recover it. 00:30:15.789 [2024-07-16 00:06:30.711517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.789 [2024-07-16 00:06:30.711527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.789 qpair failed and we were unable to recover it. 
00:30:15.789 [2024-07-16 00:06:30.711754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.789 [2024-07-16 00:06:30.711763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.789 qpair failed and we were unable to recover it. 00:30:15.789 [2024-07-16 00:06:30.711972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.789 [2024-07-16 00:06:30.711981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.789 qpair failed and we were unable to recover it. 00:30:15.789 [2024-07-16 00:06:30.712329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.789 [2024-07-16 00:06:30.712340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.789 qpair failed and we were unable to recover it. 00:30:15.789 [2024-07-16 00:06:30.712697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.789 [2024-07-16 00:06:30.712706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.789 qpair failed and we were unable to recover it. 00:30:15.789 [2024-07-16 00:06:30.713059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.789 [2024-07-16 00:06:30.713068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.789 qpair failed and we were unable to recover it. 00:30:15.789 [2024-07-16 00:06:30.713427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.789 [2024-07-16 00:06:30.713436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.789 qpair failed and we were unable to recover it. 00:30:15.789 [2024-07-16 00:06:30.713815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.789 [2024-07-16 00:06:30.713827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.789 qpair failed and we were unable to recover it. 00:30:15.789 [2024-07-16 00:06:30.714140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.789 [2024-07-16 00:06:30.714149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.789 qpair failed and we were unable to recover it. 00:30:15.789 [2024-07-16 00:06:30.714504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.789 [2024-07-16 00:06:30.714514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.789 qpair failed and we were unable to recover it. 00:30:15.789 [2024-07-16 00:06:30.714760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.789 [2024-07-16 00:06:30.714770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.789 qpair failed and we were unable to recover it. 
00:30:15.789 [2024-07-16 00:06:30.715109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.789 [2024-07-16 00:06:30.715118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.789 qpair failed and we were unable to recover it. 00:30:15.789 [2024-07-16 00:06:30.715337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.789 [2024-07-16 00:06:30.715347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.789 qpair failed and we were unable to recover it. 00:30:15.789 [2024-07-16 00:06:30.715691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.789 [2024-07-16 00:06:30.715701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.789 qpair failed and we were unable to recover it. 00:30:15.789 [2024-07-16 00:06:30.716062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.789 [2024-07-16 00:06:30.716071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.789 qpair failed and we were unable to recover it. 00:30:15.789 [2024-07-16 00:06:30.716430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.790 [2024-07-16 00:06:30.716440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.790 qpair failed and we were unable to recover it. 00:30:15.790 [2024-07-16 00:06:30.716804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.790 [2024-07-16 00:06:30.716814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.790 qpair failed and we were unable to recover it. 00:30:15.790 [2024-07-16 00:06:30.717176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.790 [2024-07-16 00:06:30.717185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.790 qpair failed and we were unable to recover it. 00:30:15.790 [2024-07-16 00:06:30.717429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.790 [2024-07-16 00:06:30.717439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.790 qpair failed and we were unable to recover it. 00:30:15.790 [2024-07-16 00:06:30.717777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.790 [2024-07-16 00:06:30.717787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.790 qpair failed and we were unable to recover it. 00:30:15.790 [2024-07-16 00:06:30.717872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.790 [2024-07-16 00:06:30.717881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.790 qpair failed and we were unable to recover it. 
00:30:15.790 [2024-07-16 00:06:30.718114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.790 [2024-07-16 00:06:30.718123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.790 qpair failed and we were unable to recover it. 00:30:15.790 [2024-07-16 00:06:30.718276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.790 [2024-07-16 00:06:30.718286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.790 qpair failed and we were unable to recover it. 00:30:15.790 [2024-07-16 00:06:30.718564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.790 [2024-07-16 00:06:30.718574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.790 qpair failed and we were unable to recover it. 00:30:15.790 [2024-07-16 00:06:30.718949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.790 [2024-07-16 00:06:30.718959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.790 qpair failed and we were unable to recover it. 00:30:15.790 [2024-07-16 00:06:30.719301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.790 [2024-07-16 00:06:30.719311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.790 qpair failed and we were unable to recover it. 00:30:15.790 [2024-07-16 00:06:30.719685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.790 [2024-07-16 00:06:30.719695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.790 qpair failed and we were unable to recover it. 00:30:15.790 [2024-07-16 00:06:30.719822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.790 [2024-07-16 00:06:30.719832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.790 qpair failed and we were unable to recover it. 
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Write completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Write completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Write completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Write completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Write completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Write completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 Read completed with error (sct=0, sc=8)
00:30:15.790 starting I/O failed
00:30:15.790 [2024-07-16 00:06:30.720599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.790 [2024-07-16 00:06:30.721114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.790 [2024-07-16 00:06:30.721155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8d0000b90 with addr=10.0.0.2, port=4420
00:30:15.790 qpair failed and we were unable to recover it.
00:30:15.790 [2024-07-16 00:06:30.721633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.790 [2024-07-16 00:06:30.721721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8d0000b90 with addr=10.0.0.2, port=4420 00:30:15.790 qpair failed and we were unable to recover it. 00:30:15.790 [2024-07-16 00:06:30.722106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.790 [2024-07-16 00:06:30.722118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.790 qpair failed and we were unable to recover it. 00:30:15.790 [2024-07-16 00:06:30.722570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.790 [2024-07-16 00:06:30.722579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.790 qpair failed and we were unable to recover it. 00:30:15.790 [2024-07-16 00:06:30.722934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.790 [2024-07-16 00:06:30.722943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.790 qpair failed and we were unable to recover it. 00:30:15.790 [2024-07-16 00:06:30.723184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.790 [2024-07-16 00:06:30.723193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.790 qpair failed and we were unable to recover it. 00:30:15.790 [2024-07-16 00:06:30.723407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.790 [2024-07-16 00:06:30.723417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.790 qpair failed and we were unable to recover it. 00:30:15.790 [2024-07-16 00:06:30.723763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.790 [2024-07-16 00:06:30.723773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.790 qpair failed and we were unable to recover it. 00:30:15.790 [2024-07-16 00:06:30.724023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.790 [2024-07-16 00:06:30.724032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.790 qpair failed and we were unable to recover it. 00:30:15.790 [2024-07-16 00:06:30.724271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.790 [2024-07-16 00:06:30.724281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.790 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.724686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.724696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 
00:30:15.791 [2024-07-16 00:06:30.725076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.725085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.725295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.725305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.725627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.725637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.726001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.726010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.726381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.726390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.726744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.726753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.726991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.727001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.727407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.727417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.727809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.727819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.728194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.728204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 
00:30:15.791 [2024-07-16 00:06:30.728561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.728571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.728930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.728939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.729299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.729309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.729548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.729557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.729762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.729772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.730175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.730184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.730546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.730556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.730923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.730932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.731319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.731329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.731675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.731685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 
00:30:15.791 [2024-07-16 00:06:30.732022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.732031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.732385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.732394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.732608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.732617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.732996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.733005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.733379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.733389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.733767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.733776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.733990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.734000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.734454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.734463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.734838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.734847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.735037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.735048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 
00:30:15.791 [2024-07-16 00:06:30.735289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.735299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.735667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.791 [2024-07-16 00:06:30.735676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.791 qpair failed and we were unable to recover it. 00:30:15.791 [2024-07-16 00:06:30.735994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.736003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.736360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.736369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.736751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.736760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.737118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.737126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.737530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.737540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.737900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.737909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.738273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.738283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.738613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.738623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 
00:30:15.792 [2024-07-16 00:06:30.738704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.738712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.738939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.738948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.739287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.739297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.739541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.739552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.739803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.739812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.740066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.740076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.740316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.740325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.740671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.740680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.741035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.741044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.741383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.741393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 
00:30:15.792 [2024-07-16 00:06:30.741611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.741620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.741871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.741881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.742248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.742259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.742616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.742626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.742980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.742990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.743359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.743369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.743746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.743758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.744140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.744149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.744484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.744494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.744877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.744887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 
00:30:15.792 [2024-07-16 00:06:30.745251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.745261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.745468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.745478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.745777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.745787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.746142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.746151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.746481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.746491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.792 qpair failed and we were unable to recover it. 00:30:15.792 [2024-07-16 00:06:30.746832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.792 [2024-07-16 00:06:30.746842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 00:30:15.793 [2024-07-16 00:06:30.747222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.747240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 00:30:15.793 [2024-07-16 00:06:30.747603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.747613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 00:30:15.793 [2024-07-16 00:06:30.747859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.747868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 00:30:15.793 [2024-07-16 00:06:30.748070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.748080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 
00:30:15.793 [2024-07-16 00:06:30.748259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.748269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 00:30:15.793 [2024-07-16 00:06:30.748506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.748515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 00:30:15.793 [2024-07-16 00:06:30.748882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.748892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 00:30:15.793 [2024-07-16 00:06:30.749110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.749120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 00:30:15.793 [2024-07-16 00:06:30.749464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.749474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 00:30:15.793 [2024-07-16 00:06:30.749862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.749871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 00:30:15.793 [2024-07-16 00:06:30.750222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.750236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 00:30:15.793 [2024-07-16 00:06:30.750596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.750605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 00:30:15.793 [2024-07-16 00:06:30.750974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.750983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 00:30:15.793 [2024-07-16 00:06:30.751179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.751188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 
00:30:15.793 [2024-07-16 00:06:30.751533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.751543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 00:30:15.793 [2024-07-16 00:06:30.751877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.751886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 00:30:15.793 [2024-07-16 00:06:30.752247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.752257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 00:30:15.793 [2024-07-16 00:06:30.752656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.752665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 00:30:15.793 [2024-07-16 00:06:30.753003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.753012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 00:30:15.793 [2024-07-16 00:06:30.753377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.753387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 00:30:15.793 [2024-07-16 00:06:30.753620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.753629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 00:30:15.793 [2024-07-16 00:06:30.753824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.753835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 00:30:15.793 [2024-07-16 00:06:30.754188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.754197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 00:30:15.793 [2024-07-16 00:06:30.754574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.754583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 
00:30:15.793 [2024-07-16 00:06:30.754780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.754790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 00:30:15.793 [2024-07-16 00:06:30.755164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.793 [2024-07-16 00:06:30.755173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.793 qpair failed and we were unable to recover it. 00:30:15.793 [2024-07-16 00:06:30.755517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.755527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.755893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.755902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.756265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.756275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.756659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.756668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.756986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.756995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.757365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.757375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.757753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.757762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.758017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.758027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 
00:30:15.794 [2024-07-16 00:06:30.758278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.758288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.758681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.758690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.759026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.759035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.759395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.759404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.759788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.759797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.760053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.760062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.760447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.760457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.760798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.760807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.761021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.761031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.761352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.761361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 
00:30:15.794 [2024-07-16 00:06:30.761725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.761735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.762092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.762101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.762492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.762501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.762768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.762776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.763110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.763119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.763489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.763498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.763854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.763863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.764227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.764240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.764594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.764603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.764817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.764827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 
00:30:15.794 [2024-07-16 00:06:30.765132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.765141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.765458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.765468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.765823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.765832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.766204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.766213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.794 [2024-07-16 00:06:30.766595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.794 [2024-07-16 00:06:30.766607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.794 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.766967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.766975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.767337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.767347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.767720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.767729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.768077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.768086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.768269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.768278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 
00:30:15.795 [2024-07-16 00:06:30.768613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.768622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.768960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.768969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.769331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.769340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.769681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.769690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.770021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.770030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.770433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.770443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.770843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.770852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.771191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.771200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.771591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.771601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.771869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.771878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 
00:30:15.795 [2024-07-16 00:06:30.772040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.772051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.772362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.772372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.772782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.772791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.773007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.773016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.773385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.773394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.773742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.773751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.774112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.774121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.774490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.774499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.774858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.774867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.775238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.775248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 
00:30:15.795 [2024-07-16 00:06:30.775633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.775642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.775998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.776009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.776387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.776397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.776790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.776800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.777157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.777167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.777547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.777557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.777893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.777903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.778101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.795 [2024-07-16 00:06:30.778110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.795 qpair failed and we were unable to recover it. 00:30:15.795 [2024-07-16 00:06:30.778299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.778309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.778680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.778689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 
00:30:15.796 [2024-07-16 00:06:30.778885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.778896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.779242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.779253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.779620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.779629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.779989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.779998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.780356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.780366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.780730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.780739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.781110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.781120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.781483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.781492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.781845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.781854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.782042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.782051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 
00:30:15.796 [2024-07-16 00:06:30.782248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.782257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.782615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.782624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.782981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.782990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.783354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.783363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.783727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.783736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.783976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.783984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.784244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.784254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.784577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.784586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.784950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.784961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.785194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.785204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 
00:30:15.796 [2024-07-16 00:06:30.785564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.785573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.785942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.785951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.786311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.786321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.786714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.786723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.786973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.786982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.787213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.787222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.787596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.787606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.787960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.787969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.788326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.788335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.788695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.788704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 
00:30:15.796 [2024-07-16 00:06:30.789057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.789066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.796 [2024-07-16 00:06:30.789428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.796 [2024-07-16 00:06:30.789437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.796 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.789832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.789842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.790194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.790203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.790629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.790639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.790947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.790956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.791131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.791141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.791456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.791467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.791829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.791838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.792027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.792036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 
00:30:15.797 [2024-07-16 00:06:30.792330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.792339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.792722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.792732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.793064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.793073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.793409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.793419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.793617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.793627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.794011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.794020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.794290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.794299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.794648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.794657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.794794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.794804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.795249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.795259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 
00:30:15.797 [2024-07-16 00:06:30.795662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.795671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.795933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.795942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.796156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.796166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.796567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.796576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.796938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.796947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.797315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.797325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.797724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.797733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.798025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.798033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.798397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.798407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.798788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.798798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 
00:30:15.797 [2024-07-16 00:06:30.799156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.799165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.799512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.799522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.799885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.799894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.800267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.800276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.797 [2024-07-16 00:06:30.800618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.797 [2024-07-16 00:06:30.800627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.797 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.800815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.800824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.801169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.801178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.801536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.801546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.801907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.801916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.802234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.802244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 
00:30:15.798 [2024-07-16 00:06:30.802480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.802490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.802847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.802856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.803233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.803242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.803458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.803467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.803810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.803820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.804146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.804155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.804545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.804555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.804742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.804751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.804936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.804945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.805290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.805299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 
00:30:15.798 [2024-07-16 00:06:30.805694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.805703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.806037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.806046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.806184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.806194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.806532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.806541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.806744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.806753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.807109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.807118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.807328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.807349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.807737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.807747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.808117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.808127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.808326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.808336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 
00:30:15.798 [2024-07-16 00:06:30.808524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.808534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.808889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.808899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.798 [2024-07-16 00:06:30.809275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.798 [2024-07-16 00:06:30.809284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.798 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.809474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.809483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.809849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.809858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.810068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.810077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.810323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.810333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.810670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.810680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.811032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.811041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.811396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.811405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 
00:30:15.799 [2024-07-16 00:06:30.811591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.811600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.811802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.811812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.812202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.812211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.812518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.812527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.812889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.812898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.813277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.813287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.813354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.813363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.813738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.813747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.813968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.813977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.814183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.814192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 
00:30:15.799 [2024-07-16 00:06:30.814566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.814575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.814944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.814954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.815314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.815324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.815684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.815695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.816056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.816065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.816431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.816441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.816825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.816834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.817156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.817165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.817512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.817522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.817877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.817886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 
00:30:15.799 [2024-07-16 00:06:30.818079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.818088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.818444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.818453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.818810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.818819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.819206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.819215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.819609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.819618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.799 [2024-07-16 00:06:30.819976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.799 [2024-07-16 00:06:30.819985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.799 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.820356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.820366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.820573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.820582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.820951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.820960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.821161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.821170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 
00:30:15.800 [2024-07-16 00:06:30.821351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.821361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.821725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.821734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.822094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.822103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.822291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.822301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.822549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.822558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.822936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.822945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.823308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.823317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.823683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.823692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.824055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.824064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.824425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.824435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 
00:30:15.800 [2024-07-16 00:06:30.824740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.824749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.825152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.825162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.825369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.825379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.825442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.825452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.825750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.825760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.826115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.826124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.826324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.826333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.826708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.826717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.826929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.826938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.827313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.827322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 
00:30:15.800 [2024-07-16 00:06:30.827522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.827531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.827782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.827791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.827982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.827991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.828224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.828236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.828584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.828594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.828803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.828812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.829190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.829199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.829561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.829571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.800 [2024-07-16 00:06:30.829936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.800 [2024-07-16 00:06:30.829945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.800 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.830308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.830318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 
00:30:15.801 [2024-07-16 00:06:30.830696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.830705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.831058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.831067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.831436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.831445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.831676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.831685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.831887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.831896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.832251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.832261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.832639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.832648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.832966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.832975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.833348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.833358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.833739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.833748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 
00:30:15.801 [2024-07-16 00:06:30.833947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.833956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.834326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.834336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.834650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.834659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.835041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.835050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.835406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.835416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.835627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.835635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.836115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.836124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.836470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.836479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.836837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.836846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.837217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.837226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 
00:30:15.801 [2024-07-16 00:06:30.837445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.837455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.837676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.837688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.838008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.838018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.838415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.838425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.838757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.838766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.839214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.839223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.839567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.839577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.839764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.839773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.840024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.840033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.840404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.840413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 
00:30:15.801 [2024-07-16 00:06:30.840752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.840761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.840973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.801 [2024-07-16 00:06:30.840982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.801 qpair failed and we were unable to recover it. 00:30:15.801 [2024-07-16 00:06:30.841227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.841239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.841584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.841594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.841958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.841967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.842338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.842347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.842712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.842722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.842910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.842919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.843115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.843124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.843520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.843529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 
00:30:15.802 [2024-07-16 00:06:30.843740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.843751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.844123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.844132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.844397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.844406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.844770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.844779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.845096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.845105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.845442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.845452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.845809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.845818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.846176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.846185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.846533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.846545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.846914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.846923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 
00:30:15.802 [2024-07-16 00:06:30.847277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.847286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.847466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.847475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.847805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.847815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.848154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.848163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.848527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.848536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.848907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.848917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.849279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.849288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.849648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.849657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.850030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.850039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.850282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.850292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 
00:30:15.802 [2024-07-16 00:06:30.850639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.850648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.851005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.851014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.851416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.851425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.851831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.851840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.852137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.802 [2024-07-16 00:06:30.852146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.802 qpair failed and we were unable to recover it. 00:30:15.802 [2024-07-16 00:06:30.852518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.852527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.852917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.852927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.853280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.853290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.853672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.853681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.853880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.853890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 
00:30:15.803 [2024-07-16 00:06:30.854243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.854253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.854455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.854465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.854699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.854708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.854949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.854958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.855328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.855338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.855555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.855567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.855909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.855918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.856315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.856324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.856533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.856542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.856958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.856967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 
00:30:15.803 [2024-07-16 00:06:30.857324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.857334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.857714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.857723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.857911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.857920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.858154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.858163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.858510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.858520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.858876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.858885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.859096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.859105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.859316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.859326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.859512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.859521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.859888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.859898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 
00:30:15.803 [2024-07-16 00:06:30.860334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.860343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.860738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.860747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.861093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.861102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.861458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.861467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.861837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.861846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.862186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.862194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.862555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.862564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.862931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.803 [2024-07-16 00:06:30.862940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.803 qpair failed and we were unable to recover it. 00:30:15.803 [2024-07-16 00:06:30.863308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.863317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.863526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.863535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 
00:30:15.804 [2024-07-16 00:06:30.863738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.863747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.864142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.864151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.864499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.864509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.864720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.864729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.865070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.865079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.865447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.865456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.865829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.865838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.866197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.866205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.866407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.866417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.866664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.866673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 
00:30:15.804 [2024-07-16 00:06:30.866878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.866887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.867123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.867132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.867394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.867404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.867837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.867847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.868194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.868203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.868320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.868329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.868704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.868713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.868918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.868927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.869102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.869111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.869481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.869490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 
00:30:15.804 [2024-07-16 00:06:30.869846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.869855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.869913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.869921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.869988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.869997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.870381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.870390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.870725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.870734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.870946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.870955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.804 [2024-07-16 00:06:30.871338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.804 [2024-07-16 00:06:30.871347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.804 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.871705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.871714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.871899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.871909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.872256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.872265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 
00:30:15.805 [2024-07-16 00:06:30.872598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.872608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.872793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.872802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.873134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.873143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.873491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.873500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.873838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.873847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.874050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.874060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.874270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.874280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.874610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.874619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.874998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.875007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.875364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.875374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 
00:30:15.805 [2024-07-16 00:06:30.875595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.875604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.875920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.875929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.876268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.876278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.876631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.876642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.876997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.877006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.877365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.877374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.877579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.877588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.877963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.877972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.878174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.878183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.878388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.878397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 
00:30:15.805 [2024-07-16 00:06:30.878788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.878797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.878989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.878998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.879366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.879375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.879731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.879740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.880101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.880110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.880309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.880318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.880651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.880660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.880858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.880867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.881234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.881244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 00:30:15.805 [2024-07-16 00:06:30.881575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.805 [2024-07-16 00:06:30.881584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.805 qpair failed and we were unable to recover it. 
00:30:15.806 [2024-07-16 00:06:30.881941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.881950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.882161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.882171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.882629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.882638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.883021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.883030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.883384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.883393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.883739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.883748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.884092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.884101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.884484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.884493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.884850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.884859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.885218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.885226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 
00:30:15.806 [2024-07-16 00:06:30.885476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.885487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.885856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.885865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.886222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.886240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.886431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.886440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.886783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.886792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.887118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.887127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.887527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.887537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.887669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.887678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.888090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.888180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8d0000b90 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.888592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.888680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8d0000b90 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 
00:30:15.806 [2024-07-16 00:06:30.889050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.889062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.889325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.889334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.889680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.889689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.890069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.890078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.890502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.890511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.890860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.890869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.891069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.891078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.891314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.891323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.891685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.891694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 00:30:15.806 [2024-07-16 00:06:30.891746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.806 [2024-07-16 00:06:30.891755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:15.806 qpair failed and we were unable to recover it. 
00:30:15.806 [2024-07-16 00:06:30.892078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:15.806 [2024-07-16 00:06:30.892087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 
00:30:15.807 qpair failed and we were unable to recover it. 
[... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every qpair connection attempt logged between 00:06:30.892 and 00:06:30.958 ...]
00:30:16.088 [2024-07-16 00:06:30.958289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:16.088 [2024-07-16 00:06:30.958301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 
00:30:16.088 qpair failed and we were unable to recover it. 
00:30:16.088 [2024-07-16 00:06:30.958639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.088 [2024-07-16 00:06:30.958648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.088 qpair failed and we were unable to recover it. 00:30:16.088 [2024-07-16 00:06:30.958941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.088 [2024-07-16 00:06:30.958951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.088 qpair failed and we were unable to recover it. 00:30:16.088 [2024-07-16 00:06:30.959298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.088 [2024-07-16 00:06:30.959308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.088 qpair failed and we were unable to recover it. 00:30:16.088 [2024-07-16 00:06:30.959647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.088 [2024-07-16 00:06:30.959657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.088 qpair failed and we were unable to recover it. 00:30:16.088 [2024-07-16 00:06:30.960034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.088 [2024-07-16 00:06:30.960043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.088 qpair failed and we were unable to recover it. 00:30:16.088 [2024-07-16 00:06:30.960241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.088 [2024-07-16 00:06:30.960250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.088 qpair failed and we were unable to recover it. 00:30:16.088 [2024-07-16 00:06:30.960309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.088 [2024-07-16 00:06:30.960317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.088 qpair failed and we were unable to recover it. 00:30:16.088 [2024-07-16 00:06:30.960642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.088 [2024-07-16 00:06:30.960651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.088 qpair failed and we were unable to recover it. 00:30:16.088 [2024-07-16 00:06:30.960971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.088 [2024-07-16 00:06:30.960981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.088 qpair failed and we were unable to recover it. 00:30:16.088 [2024-07-16 00:06:30.961321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.088 [2024-07-16 00:06:30.961331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.088 qpair failed and we were unable to recover it. 
00:30:16.088 [2024-07-16 00:06:30.961512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.088 [2024-07-16 00:06:30.961522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.088 qpair failed and we were unable to recover it. 00:30:16.088 [2024-07-16 00:06:30.961982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.088 [2024-07-16 00:06:30.961991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.088 qpair failed and we were unable to recover it. 00:30:16.088 [2024-07-16 00:06:30.962329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.088 [2024-07-16 00:06:30.962338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.088 qpair failed and we were unable to recover it. 00:30:16.088 [2024-07-16 00:06:30.962617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.088 [2024-07-16 00:06:30.962627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.088 qpair failed and we were unable to recover it. 00:30:16.088 [2024-07-16 00:06:30.962972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.088 [2024-07-16 00:06:30.962981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.963394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.963403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.963725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.963734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.964086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.964095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.964497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.964506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.964878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.964887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 
00:30:16.089 [2024-07-16 00:06:30.965221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.965234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.965660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.965669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.965865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.965875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.966259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.966268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.966616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.966626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.966972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.966981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.967192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.967202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.967565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.967574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.967908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.967917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.968101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.968111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 
00:30:16.089 [2024-07-16 00:06:30.968400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.968409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.968768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.968777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.969113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.969121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.969466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.969475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.969807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.969817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.970168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.970178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.970518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.970528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.970858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.970867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.971209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.971218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.971561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.971571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 
00:30:16.089 [2024-07-16 00:06:30.971906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.971916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.972252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.972261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.972479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.972488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.972695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.972705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.973068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.973077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.973436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.973446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.973829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.089 [2024-07-16 00:06:30.973839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.089 qpair failed and we were unable to recover it. 00:30:16.089 [2024-07-16 00:06:30.974101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.974111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.974450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.974459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.974834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.974843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 
00:30:16.090 [2024-07-16 00:06:30.975176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.975185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.975525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.975534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.975867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.975876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.976213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.976223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.976581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.976591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.976808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.976821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.977036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.977048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.977408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.977417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.977626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.977635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.977823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.977831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 
00:30:16.090 [2024-07-16 00:06:30.978081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.978090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.978298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.978308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.978691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.978700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.978924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.978935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.979290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.979300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.979608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.979617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.979955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.979965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.980302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.980311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.980656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.980666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.980999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.981008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 
00:30:16.090 [2024-07-16 00:06:30.981384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.981393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.981463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.981471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.981807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.981816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.982169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.982178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.982515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.982525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.982745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.982755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.983104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.983114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.983363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.983373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.983731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.983740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 00:30:16.090 [2024-07-16 00:06:30.984083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.090 [2024-07-16 00:06:30.984093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.090 qpair failed and we were unable to recover it. 
00:30:16.090 [2024-07-16 00:06:30.984472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.984483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.984695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.984705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.985046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.985057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.985265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.985275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.985632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.985641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.985857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.985867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.986076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.986085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.986329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.986340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.986691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.986701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.987072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.987082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 
00:30:16.091 [2024-07-16 00:06:30.987343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.987352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.987577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.987586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.987783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.987792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.988163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.988172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.988585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.988595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.988938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.988948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.989311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.989322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.989658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.989667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.989993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.990002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.990386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.990396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 
00:30:16.091 [2024-07-16 00:06:30.990782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.990792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.991128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.991138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.991501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.991511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.991845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.991854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.992189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.992198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.992569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.992579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.992913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.992924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.993304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.993314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.993670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.993680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.994078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.994090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 
00:30:16.091 [2024-07-16 00:06:30.994460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.994469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.994702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.994712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.995081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.091 [2024-07-16 00:06:30.995090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.091 qpair failed and we were unable to recover it. 00:30:16.091 [2024-07-16 00:06:30.995405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:30.995415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:30.995771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:30.995781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:30.995990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:30.996001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:30.996189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:30.996198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:30.996569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:30.996578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:30.996781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:30.996791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:30.997131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:30.997141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 
00:30:16.092 [2024-07-16 00:06:30.997501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:30.997511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:30.997883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:30.997893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:30.998237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:30.998246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:30.998464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:30.998475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:30.998891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:30.998900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:30.999252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:30.999262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:30.999626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:30.999635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:30.999973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:30.999983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:31.000316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:31.000327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:31.000687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:31.000699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 
00:30:16.092 [2024-07-16 00:06:31.000903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:31.000914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:31.001259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:31.001269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:31.001604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:31.001614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:31.001807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:31.001815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:31.002197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:31.002206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:31.002497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:31.002506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:31.002710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:31.002720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:31.003082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:31.003092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:31.003455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:31.003465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:31.003662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:31.003672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 
00:30:16.092 [2024-07-16 00:06:31.003869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:31.003878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:31.004241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:31.004250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:31.004603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:31.004613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:31.004952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:31.004961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:31.005295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:31.005304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:31.005428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.092 [2024-07-16 00:06:31.005438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.092 qpair failed and we were unable to recover it. 00:30:16.092 [2024-07-16 00:06:31.005758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.005768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 00:30:16.093 [2024-07-16 00:06:31.006098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.006108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 00:30:16.093 [2024-07-16 00:06:31.006463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.006473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 00:30:16.093 [2024-07-16 00:06:31.006678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.006688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 
00:30:16.093 [2024-07-16 00:06:31.007008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.007017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 00:30:16.093 [2024-07-16 00:06:31.007393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.007404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 00:30:16.093 [2024-07-16 00:06:31.007778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.007787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 00:30:16.093 [2024-07-16 00:06:31.008119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.008128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 00:30:16.093 [2024-07-16 00:06:31.008339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.008349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 00:30:16.093 [2024-07-16 00:06:31.008709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.008718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 00:30:16.093 [2024-07-16 00:06:31.009071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.009080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 00:30:16.093 [2024-07-16 00:06:31.009330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.009340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 00:30:16.093 [2024-07-16 00:06:31.009723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.009733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 00:30:16.093 [2024-07-16 00:06:31.010086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.010095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 
00:30:16.093 [2024-07-16 00:06:31.010455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.010465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 00:30:16.093 [2024-07-16 00:06:31.010656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.010666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 00:30:16.093 [2024-07-16 00:06:31.011053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.011062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 00:30:16.093 [2024-07-16 00:06:31.011329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.011340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 00:30:16.093 [2024-07-16 00:06:31.011757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.011766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 00:30:16.093 [2024-07-16 00:06:31.012118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.012127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 00:30:16.093 [2024-07-16 00:06:31.012481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.012491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 00:30:16.093 [2024-07-16 00:06:31.012559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.012568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 00:30:16.093 [2024-07-16 00:06:31.012879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.012888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 00:30:16.093 [2024-07-16 00:06:31.013240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.013249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 
00:30:16.093 [2024-07-16 00:06:31.013587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.013596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 00:30:16.093 [2024-07-16 00:06:31.013883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.013893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.093 qpair failed and we were unable to recover it. 00:30:16.093 [2024-07-16 00:06:31.014250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.093 [2024-07-16 00:06:31.014260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.014458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.014467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.014671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.014681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.015017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.015027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.015393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.015403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.015799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.015811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.016156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.016166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.016412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.016422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 
00:30:16.094 [2024-07-16 00:06:31.016808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.016818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.016882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.016892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.017197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.017206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.017634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.017643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.017973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.017982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.018193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.018203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.018629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.018638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.018822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.018832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.019186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.019195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.019557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.019568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 
00:30:16.094 [2024-07-16 00:06:31.019896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.019905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.020241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.020252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.020601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.020610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.020814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.020824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.021113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.021124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.021371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.021380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.021752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.021762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.021994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.022004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.022377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.022387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.022753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.022763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 
00:30:16.094 [2024-07-16 00:06:31.023118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.023127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.023366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.023375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.023797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.023806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.024078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.024087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.024456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.094 [2024-07-16 00:06:31.024470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.094 qpair failed and we were unable to recover it. 00:30:16.094 [2024-07-16 00:06:31.024855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.024864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.025257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.025266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.025458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.025467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.025839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.025849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.026058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.026066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 
00:30:16.095 [2024-07-16 00:06:31.026490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.026500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.026834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.026844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.027220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.027242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.027444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.027453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.027792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.027801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.028131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.028140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.028331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.028340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.028784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.028794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.029132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.029142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.029516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.029526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 
00:30:16.095 [2024-07-16 00:06:31.029910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.029920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.030131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.030140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.030521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.030531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.030747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.030757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.031093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.031103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.031493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.031503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.031856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.031865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.032209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.032219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.032560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.032570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.032909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.032918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 
00:30:16.095 [2024-07-16 00:06:31.033321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.033331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.033592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.033601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.033953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.033963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.034305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.034314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.034664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.034673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.035009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.035019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.035394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.035405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.035468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.095 [2024-07-16 00:06:31.035478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.095 qpair failed and we were unable to recover it. 00:30:16.095 [2024-07-16 00:06:31.035795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.035805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.036140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.036150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 
00:30:16.096 [2024-07-16 00:06:31.036494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.036503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.036698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.036707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.037069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.037078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.037444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.037453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.037725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.037735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.038096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.038106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.038514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.038524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.038740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.038750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.039104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.039113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.039494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.039504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 
00:30:16.096 [2024-07-16 00:06:31.039840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.039850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.040033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.040044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.040410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.040420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.040777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.040786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.041145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.041155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.041352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.041362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.041701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.041711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.042065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.042075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.042483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.042492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.042865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.042876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 
00:30:16.096 [2024-07-16 00:06:31.043097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.043107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.043325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.043334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.043527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.043537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.043867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.043876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.044213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.044223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.044586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.044595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.044837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.044847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.045107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.045117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.045472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.045482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.045675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.045685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 
00:30:16.096 [2024-07-16 00:06:31.046016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.046026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.096 qpair failed and we were unable to recover it. 00:30:16.096 [2024-07-16 00:06:31.046406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.096 [2024-07-16 00:06:31.046416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.046695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.046708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.047080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.047090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.047472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.047481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.047702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.047711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.048145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.048154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.048522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.048532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.048853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.048863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.049065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.049075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 
00:30:16.097 [2024-07-16 00:06:31.049443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.049452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.049877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.049887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.050112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.050122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.050563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.050573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.050788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.050799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.051184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.051195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.051542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.051552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.051929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.051938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.052296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.052305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.052695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.052705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 
00:30:16.097 [2024-07-16 00:06:31.052886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.052895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.053216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.053226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.053583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.053593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.054031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.054041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.054346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.054356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.054743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.054753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.055119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.055129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.055320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.055332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.055672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.055681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.056041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.056053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 
00:30:16.097 [2024-07-16 00:06:31.056436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.056446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.056807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.056816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.056989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.056999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.057191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.057201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.097 [2024-07-16 00:06:31.057568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.097 [2024-07-16 00:06:31.057578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.097 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.057942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.057951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.058333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.058344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.058555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.058565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.058705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.058715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.058960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.058971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 
00:30:16.098 [2024-07-16 00:06:31.059262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.059272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.059550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.059559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.059920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.059930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.060281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.060291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.060630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.060639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.061002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.061012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.061369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.061378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.061720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.061730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.062014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.062024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.062412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.062422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 
00:30:16.098 [2024-07-16 00:06:31.062787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.062797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.063151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.063161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.063508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.063518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.063877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.063886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.064231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.064242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.064536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.064545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.064740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.064752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.064965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.064975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.065289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.065299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.065653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.065663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 
00:30:16.098 [2024-07-16 00:06:31.066038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.066048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.066452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.066462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.066656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.066666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.066984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.066994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.067371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.067380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.067739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.067748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.068082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.068093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.068314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.098 [2024-07-16 00:06:31.068324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.098 qpair failed and we were unable to recover it. 00:30:16.098 [2024-07-16 00:06:31.068550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.068560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.068731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.068741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 
00:30:16.099 [2024-07-16 00:06:31.068934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.068944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.069046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.069055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.069633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.069721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8d0000b90 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.070196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.070250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8d0000b90 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.070649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.070679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb8d0000b90 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.070935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.070946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.071256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.071267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.071492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.071502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.071882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.071892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.072273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.072283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 
00:30:16.099 [2024-07-16 00:06:31.072472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.072482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.072857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.072868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.073242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.073252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.073476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.073486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.073867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.073877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.074255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.074265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.074464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.074474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.074720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.074731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.075042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.075052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.075420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.075430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 
00:30:16.099 [2024-07-16 00:06:31.075787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.075797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.076140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.076150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.076217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.076227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.076566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.076576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.076814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.076824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.077209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.099 [2024-07-16 00:06:31.077219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.099 qpair failed and we were unable to recover it. 00:30:16.099 [2024-07-16 00:06:31.077634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.077645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.077721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.077732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.077940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.077950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.078273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.078284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 
00:30:16.100 [2024-07-16 00:06:31.078559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.078569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.078784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.078794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.079121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.079131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.079495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.079506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.079858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.079868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.080286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.080296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.080684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.080694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.081073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.081083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.081146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.081156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.081468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.081479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 
00:30:16.100 [2024-07-16 00:06:31.081859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.081869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.082256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.082267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.082487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.082497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.082853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.082863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.083247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.083257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.083646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.083657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.084039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.084049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.084403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.084413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.084621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.084630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.084791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.084800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 
00:30:16.100 [2024-07-16 00:06:31.084946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.084954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.085301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.085310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.085654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.085664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.086038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.086047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.086384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.086396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.086740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.086749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.086945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.086956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.087293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.087303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.087651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.100 [2024-07-16 00:06:31.087660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.100 qpair failed and we were unable to recover it. 00:30:16.100 [2024-07-16 00:06:31.087900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.087909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 
00:30:16.101 [2024-07-16 00:06:31.088287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.088297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.088350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.088358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.088475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.088484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.088895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.088904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.088968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.088977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.089288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.089297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.089649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.089658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.089864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.089874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.090235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.090244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.090502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.090511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 
00:30:16.101 [2024-07-16 00:06:31.090721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.090731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.091150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.091160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.091524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.091534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.091727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.091737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.091824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.091835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.092145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.092154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.092513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.092522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.092888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.092898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.093154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.093164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.093523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.093532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 
00:30:16.101 [2024-07-16 00:06:31.093876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.093886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.094298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.094310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.094653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.094663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.094924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.094934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.095116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.095126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.095493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.095503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.095875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.095884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.096075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.096086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.096458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.096468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.096805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.096814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 
00:30:16.101 [2024-07-16 00:06:31.097015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.097024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.097385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.101 [2024-07-16 00:06:31.097395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.101 qpair failed and we were unable to recover it. 00:30:16.101 [2024-07-16 00:06:31.097818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.097827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.098172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.098181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.098523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.098532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.098896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.098905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.099241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.099251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.099603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.099612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.099750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.099759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.100022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.100032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 
00:30:16.102 [2024-07-16 00:06:31.100288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.100297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.100506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.100515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.100897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.100907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.101201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.101211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.101567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.101578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.101781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.101790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.102027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.102036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.102473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.102483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.102817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.102828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.103028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.103037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 
00:30:16.102 [2024-07-16 00:06:31.103402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.103412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.103625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.103645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.104064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.104073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.104438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.104448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.104793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.104802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.105048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.105058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.105408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.105417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.105696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.105706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.106038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.106048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.106425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.106435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 
00:30:16.102 [2024-07-16 00:06:31.106765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.106774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.107187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.107198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.107553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.107563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.107916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.107925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.108300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.108310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.102 qpair failed and we were unable to recover it. 00:30:16.102 [2024-07-16 00:06:31.108660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.102 [2024-07-16 00:06:31.108669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.108886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.108896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.109203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.109213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.109585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.109595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.109838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.109847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 
00:30:16.103 [2024-07-16 00:06:31.110211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.110221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.110593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.110602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.110974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.110984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.111169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.111180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.111499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.111509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.111841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.111850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.112178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.112187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.112523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.112532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.112865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.112875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.113262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.113272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 
00:30:16.103 [2024-07-16 00:06:31.113484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.113494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.113823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.113833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.114036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.114047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.114391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.114401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.114804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.114813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.115154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.115163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.115355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.115365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.115733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.115742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.116113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.116122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.116550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.116559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 
00:30:16.103 [2024-07-16 00:06:31.116813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.116823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.117013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.117024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.117363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.117372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.117763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.117773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.118111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.118121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.118317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.118327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.118519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.118528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.118729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.118740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.103 [2024-07-16 00:06:31.119117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.103 [2024-07-16 00:06:31.119126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.103 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.119266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.119284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 
00:30:16.104 [2024-07-16 00:06:31.119628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.119638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.119881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.119890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.120086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.120095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.120568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.120578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.120911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.120921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.121247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.121256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.121435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.121454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.121716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.121727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.122096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.122105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.122352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.122362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 
00:30:16.104 [2024-07-16 00:06:31.122738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.122747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.123153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.123163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.123495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.123505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.123884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.123893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.124080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.124090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.124293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.124303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.124736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.124748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.124930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.124940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.125320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.125330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.125710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.125719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 
00:30:16.104 [2024-07-16 00:06:31.126060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.126069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.126412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.126422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.126757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.126767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.126942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.126951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.127398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.127408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.127783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.127792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.128119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.104 [2024-07-16 00:06:31.128128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.104 qpair failed and we were unable to recover it. 00:30:16.104 [2024-07-16 00:06:31.128521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.128532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.128904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.128914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.129276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.129286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 
00:30:16.105 [2024-07-16 00:06:31.129658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.129667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.130000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.130009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.130379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.130388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.130740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.130751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.130944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.130955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.131138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.131148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.131517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.131527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.131896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.131906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.132109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.132118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.132463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.132473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 
00:30:16.105 [2024-07-16 00:06:31.132853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.132863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.133087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.133097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.133481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.133491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.133835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.133846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.134046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.134056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.134443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.134452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.134791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.134800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.135169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.135178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.135510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.135520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.135702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.135713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 
00:30:16.105 [2024-07-16 00:06:31.136076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.136085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.136280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.136290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.136658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.136668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.137040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.137049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.137437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.137450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.137803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.137814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.138018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.138028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.138393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.138403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.138761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.138771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 00:30:16.105 [2024-07-16 00:06:31.139109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.105 [2024-07-16 00:06:31.139118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.105 qpair failed and we were unable to recover it. 
00:30:16.105 [2024-07-16 00:06:31.139374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.139385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.139593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.139602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.139788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.139797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.140190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.140200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.140485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.140494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.140848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.140858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.141236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.141247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.141444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.141453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.141829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.141838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.142219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.142228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 
00:30:16.106 [2024-07-16 00:06:31.142620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.142630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.142995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.143005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.143094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.143103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.143285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.143295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.143678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.143687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.143866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.143876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.144263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.144273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.144491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.144501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.144870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.144879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.145216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.145226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 
00:30:16.106 [2024-07-16 00:06:31.145449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.145458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.145820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.145830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.146203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.146213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.146567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.146577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.146927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.146936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.147143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.147154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.147523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.147532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.147867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.147877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.148204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.148213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.148615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.148625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 
00:30:16.106 [2024-07-16 00:06:31.149028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.149038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.149375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.149385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.149570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.149579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.106 qpair failed and we were unable to recover it. 00:30:16.106 [2024-07-16 00:06:31.149839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.106 [2024-07-16 00:06:31.149849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.150157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.150167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.150567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.150577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.150907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.150916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.151249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.151259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.151456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.151465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.151532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.151542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 
00:30:16.107 [2024-07-16 00:06:31.151891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.151900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.152240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.152250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.152477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.152487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.152871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.152881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.153242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.153253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.153602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.153612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.153941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.153951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.154160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.154170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.154510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.154519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.154724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.154733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 
00:30:16.107 [2024-07-16 00:06:31.154799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.154809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.155088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.155102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.155448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.155458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.155643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.155652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.155998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.156007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.156207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.156217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.156576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.156586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.156918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.156928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.157263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.157272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.157463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.157472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 
00:30:16.107 [2024-07-16 00:06:31.157799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.157809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.158147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.158156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.158562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.158571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.158902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.158912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.159249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.159259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.159319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.159328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.107 qpair failed and we were unable to recover it. 00:30:16.107 [2024-07-16 00:06:31.159661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.107 [2024-07-16 00:06:31.159671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.108 qpair failed and we were unable to recover it. 00:30:16.108 [2024-07-16 00:06:31.160015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.108 [2024-07-16 00:06:31.160025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.108 qpair failed and we were unable to recover it. 00:30:16.108 [2024-07-16 00:06:31.160386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.108 [2024-07-16 00:06:31.160396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.108 qpair failed and we were unable to recover it. 00:30:16.108 [2024-07-16 00:06:31.160733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.108 [2024-07-16 00:06:31.160744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.108 qpair failed and we were unable to recover it. 
00:30:16.108 [2024-07-16 00:06:31.161070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.108 [2024-07-16 00:06:31.161079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.108 qpair failed and we were unable to recover it. 00:30:16.108 [2024-07-16 00:06:31.161265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.108 [2024-07-16 00:06:31.161275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.108 qpair failed and we were unable to recover it. 00:30:16.108 [2024-07-16 00:06:31.161677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.108 [2024-07-16 00:06:31.161686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.108 qpair failed and we were unable to recover it. 00:30:16.108 [2024-07-16 00:06:31.161891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.108 [2024-07-16 00:06:31.161901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.108 qpair failed and we were unable to recover it. 00:30:16.108 [2024-07-16 00:06:31.162109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.108 [2024-07-16 00:06:31.162118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.108 qpair failed and we were unable to recover it. 00:30:16.108 [2024-07-16 00:06:31.162477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.108 [2024-07-16 00:06:31.162487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.108 qpair failed and we were unable to recover it. 00:30:16.108 [2024-07-16 00:06:31.162773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.108 [2024-07-16 00:06:31.162783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.108 qpair failed and we were unable to recover it. 00:30:16.108 [2024-07-16 00:06:31.162983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.108 [2024-07-16 00:06:31.162993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.108 qpair failed and we were unable to recover it. 00:30:16.108 [2024-07-16 00:06:31.163221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.108 [2024-07-16 00:06:31.163237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.108 qpair failed and we were unable to recover it. 00:30:16.108 [2024-07-16 00:06:31.163615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.108 [2024-07-16 00:06:31.163625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.108 qpair failed and we were unable to recover it. 
00:30:16.108 [2024-07-16 00:06:31.164016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.108 [2024-07-16 00:06:31.164025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.108 qpair failed and we were unable to recover it. 00:30:16.108 [2024-07-16 00:06:31.164344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.108 [2024-07-16 00:06:31.164353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.108 qpair failed and we were unable to recover it. 00:30:16.108 [2024-07-16 00:06:31.164753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.108 [2024-07-16 00:06:31.164762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.108 qpair failed and we were unable to recover it. 00:30:16.108 [2024-07-16 00:06:31.165112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.108 [2024-07-16 00:06:31.165122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.108 qpair failed and we were unable to recover it. 00:30:16.108 [2024-07-16 00:06:31.165485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.108 [2024-07-16 00:06:31.165495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.108 qpair failed and we were unable to recover it. 00:30:16.108 [2024-07-16 00:06:31.165837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.108 [2024-07-16 00:06:31.165847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.108 qpair failed and we were unable to recover it. 00:30:16.108 [2024-07-16 00:06:31.166249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.108 [2024-07-16 00:06:31.166258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.108 qpair failed and we were unable to recover it. 00:30:16.108 [2024-07-16 00:06:31.166460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.108 [2024-07-16 00:06:31.166469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.108 qpair failed and we were unable to recover it. 00:30:16.108 [2024-07-16 00:06:31.166805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.108 [2024-07-16 00:06:31.166815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.108 qpair failed and we were unable to recover it. 00:30:16.108 [2024-07-16 00:06:31.167012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.108 [2024-07-16 00:06:31.167021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.108 qpair failed and we were unable to recover it. 
00:30:16.114 [2024-07-16 00:06:31.229881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.114 [2024-07-16 00:06:31.229890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.114 qpair failed and we were unable to recover it. 00:30:16.114 [2024-07-16 00:06:31.230272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.114 [2024-07-16 00:06:31.230281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.114 qpair failed and we were unable to recover it. 00:30:16.114 [2024-07-16 00:06:31.230652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.230662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.230859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.230870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.231201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.231210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.231548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.231560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.231763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.231774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.232115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.232125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.232382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.232392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.232741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.232750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 
00:30:16.115 [2024-07-16 00:06:31.233078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.233088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.233419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.233428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.233768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.233777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.234150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.234160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.234523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.234534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.234872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.234882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.235290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.235301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.235649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.235658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.236006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.236015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.236346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.236355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 
00:30:16.115 [2024-07-16 00:06:31.236572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.236581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.236933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.236942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.237236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.237246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.237528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.237539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.237923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.237933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.238192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.238202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.238557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.238568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.238943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.238953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.239319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.239330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.239760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.239770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 
00:30:16.115 [2024-07-16 00:06:31.240119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.240128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.115 [2024-07-16 00:06:31.240508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.115 [2024-07-16 00:06:31.240518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.115 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.240731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.240744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.240983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.241003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.241432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.241442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.241697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.241707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.242084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.242093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.242428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.242437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.242633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.242642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.242896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.242905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 
00:30:16.116 [2024-07-16 00:06:31.243272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.243282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.243584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.243594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.243922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.243931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.244179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.244190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.244545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.244555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.244937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.244946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.245293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.245304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.245690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.245699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.246011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.246020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.246190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.246198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 
00:30:16.116 [2024-07-16 00:06:31.246533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.246544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.246698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.246708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.247073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.247083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.247344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.247356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.247495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.247504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.247850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.247860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.248168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.248179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.248335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.248345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.248674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.248683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.249035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.249044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 
00:30:16.116 [2024-07-16 00:06:31.249331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.249341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.249696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.249705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.250127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.250138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.250343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.250353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.116 [2024-07-16 00:06:31.250763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.116 [2024-07-16 00:06:31.250774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.116 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.251200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.251210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.251617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.251627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.251874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.251885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.252238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.252248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.252638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.252648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 
00:30:16.117 [2024-07-16 00:06:31.252976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.252986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.253194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.253205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.253509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.253519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.253788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.253799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.254157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.254167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.254564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.254574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.254780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.254790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.255102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.255112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.255301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.255312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.255653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.255662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 
00:30:16.117 [2024-07-16 00:06:31.255863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.255873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.256251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.256261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.256606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.256616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.256859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.256869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.257186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.257195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.257596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.257605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.257804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.257814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.258063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.258074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.258404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.258414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.258800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.258812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 
00:30:16.117 [2024-07-16 00:06:31.259147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.259157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.259432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.259442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.259799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.259809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.260117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.260126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.260209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.260219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.260395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.260406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.260612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.260622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.117 [2024-07-16 00:06:31.260849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.117 [2024-07-16 00:06:31.260859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.117 qpair failed and we were unable to recover it. 00:30:16.118 [2024-07-16 00:06:31.261241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-07-16 00:06:31.261252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 00:30:16.118 [2024-07-16 00:06:31.261613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-07-16 00:06:31.261623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 
00:30:16.118 [2024-07-16 00:06:31.261957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-07-16 00:06:31.261969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 00:30:16.118 [2024-07-16 00:06:31.262303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-07-16 00:06:31.262313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 00:30:16.118 [2024-07-16 00:06:31.262676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-07-16 00:06:31.262684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 00:30:16.118 [2024-07-16 00:06:31.263020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-07-16 00:06:31.263029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 00:30:16.118 [2024-07-16 00:06:31.263273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-07-16 00:06:31.263283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 00:30:16.118 [2024-07-16 00:06:31.263496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-07-16 00:06:31.263506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 00:30:16.118 [2024-07-16 00:06:31.263736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-07-16 00:06:31.263748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 00:30:16.118 [2024-07-16 00:06:31.264115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.118 [2024-07-16 00:06:31.264125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.118 qpair failed and we were unable to recover it. 00:30:16.386 [2024-07-16 00:06:31.264598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.386 [2024-07-16 00:06:31.264609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.386 qpair failed and we were unable to recover it. 00:30:16.386 [2024-07-16 00:06:31.264815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.386 [2024-07-16 00:06:31.264825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.386 qpair failed and we were unable to recover it. 
00:30:16.386 [2024-07-16 00:06:31.264888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.386 [2024-07-16 00:06:31.264897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.386 qpair failed and we were unable to recover it. 00:30:16.386 [2024-07-16 00:06:31.265239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.386 [2024-07-16 00:06:31.265249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.386 qpair failed and we were unable to recover it. 00:30:16.386 [2024-07-16 00:06:31.265477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.386 [2024-07-16 00:06:31.265487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.386 qpair failed and we were unable to recover it. 00:30:16.386 [2024-07-16 00:06:31.265778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.386 [2024-07-16 00:06:31.265788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.386 qpair failed and we were unable to recover it. 00:30:16.386 [2024-07-16 00:06:31.266125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.387 [2024-07-16 00:06:31.266134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.387 qpair failed and we were unable to recover it. 00:30:16.387 [2024-07-16 00:06:31.266224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.387 [2024-07-16 00:06:31.266236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.387 qpair failed and we were unable to recover it. 00:30:16.387 [2024-07-16 00:06:31.266586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.387 [2024-07-16 00:06:31.266596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.387 qpair failed and we were unable to recover it. 00:30:16.387 [2024-07-16 00:06:31.266953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.387 [2024-07-16 00:06:31.266964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.387 qpair failed and we were unable to recover it. 00:30:16.387 [2024-07-16 00:06:31.267350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.387 [2024-07-16 00:06:31.267360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.387 qpair failed and we were unable to recover it. 00:30:16.387 [2024-07-16 00:06:31.267695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.387 [2024-07-16 00:06:31.267705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.387 qpair failed and we were unable to recover it. 
00:30:16.387 [2024-07-16 00:06:31.267926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.387 [2024-07-16 00:06:31.267935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.387 qpair failed and we were unable to recover it. 00:30:16.387 [2024-07-16 00:06:31.268282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.387 [2024-07-16 00:06:31.268292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.387 qpair failed and we were unable to recover it. 00:30:16.387 [2024-07-16 00:06:31.268548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.387 [2024-07-16 00:06:31.268558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.387 qpair failed and we were unable to recover it. 00:30:16.387 [2024-07-16 00:06:31.268923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.387 [2024-07-16 00:06:31.268932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.387 qpair failed and we were unable to recover it. 00:30:16.387 [2024-07-16 00:06:31.269222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.387 [2024-07-16 00:06:31.269234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.387 qpair failed and we were unable to recover it. 00:30:16.387 [2024-07-16 00:06:31.269453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.387 [2024-07-16 00:06:31.269463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.387 qpair failed and we were unable to recover it. 00:30:16.387 [2024-07-16 00:06:31.269671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.387 [2024-07-16 00:06:31.269681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.387 qpair failed and we were unable to recover it. 00:30:16.387 [2024-07-16 00:06:31.270038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.387 [2024-07-16 00:06:31.270049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.387 qpair failed and we were unable to recover it. 00:30:16.387 [2024-07-16 00:06:31.270311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.387 [2024-07-16 00:06:31.270320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.387 qpair failed and we were unable to recover it. 00:30:16.387 [2024-07-16 00:06:31.270507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.387 [2024-07-16 00:06:31.270517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.387 qpair failed and we were unable to recover it. 
00:30:16.387 [2024-07-16 00:06:31.270708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.387 [2024-07-16 00:06:31.270717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:16.387 qpair failed and we were unable to recover it.
00:30:16.387 [the same connect() failed, errno = 111 / sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." sequence repeats for every reconnect attempt from 00:06:31.271030 through 00:06:31.315279 (elapsed 00:30:16.387-00:30:16.391)]
00:30:16.391 [2024-07-16 00:06:31.315638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.391 [2024-07-16 00:06:31.315648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:16.391 qpair failed and we were unable to recover it.
00:30:16.391 [the same sequence repeats for the attempts from 00:06:31.315848 through 00:06:31.317357]
00:30:16.392 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:30:16.392 [2024-07-16 00:06:31.317710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.392 [2024-07-16 00:06:31.317722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:16.392 qpair failed and we were unable to recover it.
00:30:16.392 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # return 0
00:30:16.392 [2024-07-16 00:06:31.318075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.392 [2024-07-16 00:06:31.318086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:16.392 qpair failed and we were unable to recover it.
00:30:16.392 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:30:16.392 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:30:16.392 [2024-07-16 00:06:31.318438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.392 [2024-07-16 00:06:31.318449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:16.392 qpair failed and we were unable to recover it.
00:30:16.392 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:16.392 [the same sequence repeats for the attempts from 00:06:31.318795 through 00:06:31.320992]
00:30:16.392 [2024-07-16 00:06:31.321345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.392 [2024-07-16 00:06:31.321358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420
00:30:16.392 qpair failed and we were unable to recover it.
00:30:16.392 [the same connect() failed, errno = 111 / sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." sequence repeats for every reconnect attempt from 00:06:31.321556 through 00:06:31.336934 (elapsed 00:30:16.392-00:30:16.394)]
00:30:16.394 [2024-07-16 00:06:31.337285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.337297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.337719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.337729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.338110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.338120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.338299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.338310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.338486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.338498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.338905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.338916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.339132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.339144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.339331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.339342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.339672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.339683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.339750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.339759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 
00:30:16.394 [2024-07-16 00:06:31.340131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.340141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.340471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.340482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.340857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.340867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.341068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.341078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.341451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.341462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.341841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.341852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.342198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.342208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.342558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.342569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.342823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.342834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.343227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.343241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 
00:30:16.394 [2024-07-16 00:06:31.343620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.343631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.343996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.344007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.344201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.344212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.344409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.344420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.344819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.344831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.345173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.345184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.345526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.345537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.345761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.345771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.346126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.346136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.394 qpair failed and we were unable to recover it. 00:30:16.394 [2024-07-16 00:06:31.346490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.394 [2024-07-16 00:06:31.346503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 
00:30:16.395 [2024-07-16 00:06:31.346885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.346896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.347093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.347103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.347408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.347419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.347783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.347794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.348176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.348188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.348543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.348553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.348772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.348782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.348839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.348852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.349191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.349202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.349566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.349577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 
00:30:16.395 [2024-07-16 00:06:31.349925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.349936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.350324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.350335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.350722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.350732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.351069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.351079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.351283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.351294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.351622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.351633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.351978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.351989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.352342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.352353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.352708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.352719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.353165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.353177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 
00:30:16.395 [2024-07-16 00:06:31.353547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.353558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.353944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.353955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.354159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.354170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.354523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.354535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.354912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.354923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.355281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.355292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.355498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.355509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.355859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.355869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.356124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.356135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.356475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.356486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 
00:30:16.395 [2024-07-16 00:06:31.356821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.356833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.357185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.357195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.395 [2024-07-16 00:06:31.357515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.395 [2024-07-16 00:06:31.357526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.395 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.357723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.357734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:16.396 [2024-07-16 00:06:31.358079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.358090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:16.396 [2024-07-16 00:06:31.358437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.358448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:16.396 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.396 [2024-07-16 00:06:31.358824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.358835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.359209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.359220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 
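Interleaved with the connection retries, the test case nvmf_target_disconnect_tc2 installs its cleanup trap and begins configuring the target over RPC, starting with a RAM-backed bdev (rpc_cmd bdev_malloc_create 64 512 -b Malloc0 above). Outside the harness the same step would normally be issued through scripts/rpc.py; a minimal sketch, assuming the default RPC socket and the same arguments as in the log:

    # create a 64 MB malloc bdev with a 512-byte block size, named Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0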
00:30:16.396 [2024-07-16 00:06:31.359575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.359585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.359963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.359974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.360331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.360344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.360398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.360406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.360730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.360740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.361116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.361128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.361508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.361519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.361871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.361881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.362097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.362107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.362452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.362462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 
00:30:16.396 [2024-07-16 00:06:31.362810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.362821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.363197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.363207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.363587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.363597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.363816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.363826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.364188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.364199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.364556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.364567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.364876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.364886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.365246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.365257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.365582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.365592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.365984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.365994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 
00:30:16.396 [2024-07-16 00:06:31.366372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.366383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.366690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.366701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.366894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.366905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.367303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.367314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.367677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.367688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.368046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.396 [2024-07-16 00:06:31.368057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.396 qpair failed and we were unable to recover it. 00:30:16.396 [2024-07-16 00:06:31.368432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.368442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 00:30:16.397 [2024-07-16 00:06:31.368831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.368841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 00:30:16.397 [2024-07-16 00:06:31.369147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.369158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 00:30:16.397 [2024-07-16 00:06:31.369485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.369497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 
00:30:16.397 [2024-07-16 00:06:31.369844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.369855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 00:30:16.397 [2024-07-16 00:06:31.370236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.370249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 00:30:16.397 [2024-07-16 00:06:31.370607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.370618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 00:30:16.397 [2024-07-16 00:06:31.370967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.370978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 00:30:16.397 [2024-07-16 00:06:31.371285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.371297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 00:30:16.397 [2024-07-16 00:06:31.371662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.371673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 00:30:16.397 [2024-07-16 00:06:31.372059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.372069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 00:30:16.397 [2024-07-16 00:06:31.372295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.372306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 00:30:16.397 [2024-07-16 00:06:31.372664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.372674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 00:30:16.397 [2024-07-16 00:06:31.373021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.373032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 
00:30:16.397 [2024-07-16 00:06:31.373251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.373262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 00:30:16.397 [2024-07-16 00:06:31.373657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.373668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 00:30:16.397 [2024-07-16 00:06:31.374064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.374074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 00:30:16.397 [2024-07-16 00:06:31.374143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.374152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 00:30:16.397 [2024-07-16 00:06:31.374485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.374496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 00:30:16.397 [2024-07-16 00:06:31.374877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.374888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 00:30:16.397 [2024-07-16 00:06:31.375244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.375255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 00:30:16.397 Malloc0 00:30:16.397 [2024-07-16 00:06:31.375637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.375648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 00:30:16.397 [2024-07-16 00:06:31.376024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.376034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 
00:30:16.397 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:30:16.397 [2024-07-16 00:06:31.376428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.376439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 00:30:16.397 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:16.397 [2024-07-16 00:06:31.376641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.376651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 00:30:16.397 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:16.397 [2024-07-16 00:06:31.376971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.397 [2024-07-16 00:06:31.376982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.397 qpair failed and we were unable to recover it. 00:30:16.397 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.398 [2024-07-16 00:06:31.377180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.377191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.377393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.377411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.377774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.377784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.378178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.378189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.378542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.378553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 
00:30:16.398 [2024-07-16 00:06:31.378925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.378935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.379287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.379298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.379648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.379659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.380042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.380052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.380404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.380415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.380766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.380777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.381153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.381164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.381502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.381513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.381869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.381880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.382213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.382225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 
00:30:16.398 [2024-07-16 00:06:31.382579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.382590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.382792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.382803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.382917] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:16.398 [2024-07-16 00:06:31.383176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.383187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.383360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.383371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.383693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.383704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.384082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.384094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.384466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.384477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.384842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.384853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.385233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.385243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.385628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.385639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 
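The rpc_cmd nvmf_create_transport -t tcp -o call issued above is acknowledged by the target's "*** TCP Transport Init ***" notice. For reference, the equivalent standalone step, together with the usual follow-on of exposing a subsystem on 10.0.0.2:4420 (the address and port the initiator has been retrying), would look roughly as follows with scripts/rpc.py; the NQN, serial number, and everything past the first line are illustrative assumptions rather than commands taken from this log:

    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420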
00:30:16.398 [2024-07-16 00:06:31.385991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.386001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.386336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.386347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.386547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.386558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.398 qpair failed and we were unable to recover it. 00:30:16.398 [2024-07-16 00:06:31.386741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.398 [2024-07-16 00:06:31.386752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.386825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.386834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.387226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.387241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.387423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.387435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.387642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.387652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.387978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.387988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.388341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.388352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 
00:30:16.399 [2024-07-16 00:06:31.388703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.388713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.389090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.389101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.389313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.389324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.389504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.389514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.389842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.389852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.390202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.390212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.390562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.390573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.390945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.390955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.391354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.391366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.391735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.391745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 
00:30:16.399 [2024-07-16 00:06:31.391809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.391818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:30:16.399 [2024-07-16 00:06:31.392127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.392139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.392394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.392405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.392609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.392619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:16.399 [2024-07-16 00:06:31.392818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.392828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.399 [2024-07-16 00:06:31.393010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.393021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.393240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.393250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.393570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.393581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 
00:30:16.399 [2024-07-16 00:06:31.393790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.393801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.394197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.394207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.394469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.394480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.394742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.394753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.395150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.395160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.395362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.395373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.399 qpair failed and we were unable to recover it. 00:30:16.399 [2024-07-16 00:06:31.395756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.399 [2024-07-16 00:06:31.395766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 [2024-07-16 00:06:31.396014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.396025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 [2024-07-16 00:06:31.396406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.396417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 [2024-07-16 00:06:31.396760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.396770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 
00:30:16.400 [2024-07-16 00:06:31.397117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.397127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 [2024-07-16 00:06:31.397488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.397499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 [2024-07-16 00:06:31.397894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.397905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 [2024-07-16 00:06:31.398264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.398275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 [2024-07-16 00:06:31.398469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.398479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 [2024-07-16 00:06:31.398811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.398821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 [2024-07-16 00:06:31.399180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.399191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 [2024-07-16 00:06:31.399554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.399564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 [2024-07-16 00:06:31.399905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.399916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 [2024-07-16 00:06:31.400289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.400299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 
00:30:16.400 [2024-07-16 00:06:31.400517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.400527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 [2024-07-16 00:06:31.400887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.400897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 [2024-07-16 00:06:31.401247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.401257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 [2024-07-16 00:06:31.401610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.401620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 [2024-07-16 00:06:31.401969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.401979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 [2024-07-16 00:06:31.402332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.402343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 [2024-07-16 00:06:31.402721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.402731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 [2024-07-16 00:06:31.403069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.403080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 [2024-07-16 00:06:31.403485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.403496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 [2024-07-16 00:06:31.403849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.403862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 
00:30:16.400 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:30:16.400 [2024-07-16 00:06:31.404210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.404221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:16.400 [2024-07-16 00:06:31.404422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.404433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 [2024-07-16 00:06:31.404503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.404511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:16.400 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.400 [2024-07-16 00:06:31.404842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.404854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.400 [2024-07-16 00:06:31.405210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.400 [2024-07-16 00:06:31.405221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.400 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.405579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.405591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.405808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.405819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.406172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.406183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 
00:30:16.401 [2024-07-16 00:06:31.406378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.406388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.406564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.406575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.406945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.406955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.407029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.407038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.407360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.407371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.407610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.407621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.407972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.407982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.408363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.408373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.408531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.408541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.408638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.408647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 
00:30:16.401 [2024-07-16 00:06:31.408976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.408986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.409340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.409351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.409567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.409578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.409931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.409942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.410323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.410334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.410686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.410697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.411072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.411084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.411299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.411310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.411696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.411706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.411927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.411937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 
00:30:16.401 [2024-07-16 00:06:31.412287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.412297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.412656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.412667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.413046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.413056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.413421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.413431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.413748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.413759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.414172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.414183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.414402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.414412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.414799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.414809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.415115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.415126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.401 qpair failed and we were unable to recover it. 00:30:16.401 [2024-07-16 00:06:31.415492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.401 [2024-07-16 00:06:31.415502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.402 qpair failed and we were unable to recover it. 
00:30:16.402 [2024-07-16 00:06:31.415884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.402 [2024-07-16 00:06:31.415897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.402 qpair failed and we were unable to recover it. 00:30:16.402 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:30:16.402 [2024-07-16 00:06:31.416334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.402 [2024-07-16 00:06:31.416345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.402 qpair failed and we were unable to recover it. 00:30:16.402 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:16.402 [2024-07-16 00:06:31.416642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.402 [2024-07-16 00:06:31.416653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.402 qpair failed and we were unable to recover it. 00:30:16.402 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:16.402 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.402 [2024-07-16 00:06:31.417033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.402 [2024-07-16 00:06:31.417044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.402 qpair failed and we were unable to recover it. 00:30:16.402 [2024-07-16 00:06:31.417248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.402 [2024-07-16 00:06:31.417259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.402 qpair failed and we were unable to recover it. 00:30:16.402 [2024-07-16 00:06:31.417649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.402 [2024-07-16 00:06:31.417660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.402 qpair failed and we were unable to recover it. 00:30:16.402 [2024-07-16 00:06:31.417989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.402 [2024-07-16 00:06:31.417999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.402 qpair failed and we were unable to recover it. 00:30:16.402 [2024-07-16 00:06:31.418306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.402 [2024-07-16 00:06:31.418317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.402 qpair failed and we were unable to recover it. 
00:30:16.402 [2024-07-16 00:06:31.418675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.402 [2024-07-16 00:06:31.418686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.402 qpair failed and we were unable to recover it. 00:30:16.402 [2024-07-16 00:06:31.419019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.402 [2024-07-16 00:06:31.419030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.402 qpair failed and we were unable to recover it. 00:30:16.402 [2024-07-16 00:06:31.419242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.402 [2024-07-16 00:06:31.419253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.402 qpair failed and we were unable to recover it. 00:30:16.402 [2024-07-16 00:06:31.419462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.402 [2024-07-16 00:06:31.419471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.402 qpair failed and we were unable to recover it. 00:30:16.402 [2024-07-16 00:06:31.419810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.402 [2024-07-16 00:06:31.419821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.402 qpair failed and we were unable to recover it. 00:30:16.402 [2024-07-16 00:06:31.420215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.402 [2024-07-16 00:06:31.420225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.402 qpair failed and we were unable to recover it. 00:30:16.402 [2024-07-16 00:06:31.420444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.402 [2024-07-16 00:06:31.420455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.402 qpair failed and we were unable to recover it. 00:30:16.402 [2024-07-16 00:06:31.420728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.402 [2024-07-16 00:06:31.420738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.402 qpair failed and we were unable to recover it. 00:30:16.402 [2024-07-16 00:06:31.421100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.402 [2024-07-16 00:06:31.421111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.402 qpair failed and we were unable to recover it. 00:30:16.402 [2024-07-16 00:06:31.421469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.402 [2024-07-16 00:06:31.421480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.402 qpair failed and we were unable to recover it. 
00:30:16.402 [2024-07-16 00:06:31.421851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.402 [2024-07-16 00:06:31.421861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.402 qpair failed and we were unable to recover it. 00:30:16.402 [2024-07-16 00:06:31.422060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.402 [2024-07-16 00:06:31.422071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.402 qpair failed and we were unable to recover it. 00:30:16.402 [2024-07-16 00:06:31.422356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.402 [2024-07-16 00:06:31.422367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.402 qpair failed and we were unable to recover it. 00:30:16.402 [2024-07-16 00:06:31.422753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.402 [2024-07-16 00:06:31.422764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.402 qpair failed and we were unable to recover it. 00:30:16.402 [2024-07-16 00:06:31.423026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.402 [2024-07-16 00:06:31.423036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17e1a50 with addr=10.0.0.2, port=4420 00:30:16.402 qpair failed and we were unable to recover it. 00:30:16.402 [2024-07-16 00:06:31.423189] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:16.402 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:30:16.402 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:16.402 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:16.402 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.402 [2024-07-16 00:06:31.433743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.402 [2024-07-16 00:06:31.433830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.402 [2024-07-16 00:06:31.433850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.402 [2024-07-16 00:06:31.433858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.402 [2024-07-16 00:06:31.433865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.402 [2024-07-16 00:06:31.433884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.402 qpair failed and we were unable to recover it. 
00:30:16.403 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:30:16.403 00:06:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 664571 00:30:16.403 [2024-07-16 00:06:31.443725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.403 [2024-07-16 00:06:31.443797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.403 [2024-07-16 00:06:31.443814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.403 [2024-07-16 00:06:31.443822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.403 [2024-07-16 00:06:31.443829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.403 [2024-07-16 00:06:31.443845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.403 qpair failed and we were unable to recover it. 00:30:16.403 [2024-07-16 00:06:31.453744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.403 [2024-07-16 00:06:31.453818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.403 [2024-07-16 00:06:31.453833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.403 [2024-07-16 00:06:31.453840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.403 [2024-07-16 00:06:31.453848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.403 [2024-07-16 00:06:31.453862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.403 qpair failed and we were unable to recover it. 00:30:16.403 [2024-07-16 00:06:31.463614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.403 [2024-07-16 00:06:31.463691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.403 [2024-07-16 00:06:31.463709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.403 [2024-07-16 00:06:31.463716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.403 [2024-07-16 00:06:31.463724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.403 [2024-07-16 00:06:31.463740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.403 qpair failed and we were unable to recover it. 
00:30:16.403 [2024-07-16 00:06:31.473744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.403 [2024-07-16 00:06:31.473813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.403 [2024-07-16 00:06:31.473832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.403 [2024-07-16 00:06:31.473839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.403 [2024-07-16 00:06:31.473845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.403 [2024-07-16 00:06:31.473859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.403 qpair failed and we were unable to recover it. 00:30:16.403 [2024-07-16 00:06:31.483766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.403 [2024-07-16 00:06:31.483827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.403 [2024-07-16 00:06:31.483843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.403 [2024-07-16 00:06:31.483850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.403 [2024-07-16 00:06:31.483856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.403 [2024-07-16 00:06:31.483870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.403 qpair failed and we were unable to recover it. 00:30:16.403 [2024-07-16 00:06:31.493781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.403 [2024-07-16 00:06:31.493850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.403 [2024-07-16 00:06:31.493865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.403 [2024-07-16 00:06:31.493873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.403 [2024-07-16 00:06:31.493879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.403 [2024-07-16 00:06:31.493893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.403 qpair failed and we were unable to recover it. 
00:30:16.403 [2024-07-16 00:06:31.503754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.403 [2024-07-16 00:06:31.503831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.403 [2024-07-16 00:06:31.503846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.403 [2024-07-16 00:06:31.503853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.403 [2024-07-16 00:06:31.503860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.403 [2024-07-16 00:06:31.503874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.403 qpair failed and we were unable to recover it. 00:30:16.403 [2024-07-16 00:06:31.513839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.403 [2024-07-16 00:06:31.513910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.403 [2024-07-16 00:06:31.513925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.403 [2024-07-16 00:06:31.513932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.403 [2024-07-16 00:06:31.513938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.403 [2024-07-16 00:06:31.513956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.403 qpair failed and we were unable to recover it. 00:30:16.403 [2024-07-16 00:06:31.523822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.403 [2024-07-16 00:06:31.523899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.403 [2024-07-16 00:06:31.523925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.403 [2024-07-16 00:06:31.523934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.403 [2024-07-16 00:06:31.523941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.403 [2024-07-16 00:06:31.523960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.403 qpair failed and we were unable to recover it. 
00:30:16.403 [2024-07-16 00:06:31.533838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.404 [2024-07-16 00:06:31.533913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.404 [2024-07-16 00:06:31.533938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.404 [2024-07-16 00:06:31.533947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.404 [2024-07-16 00:06:31.533953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.404 [2024-07-16 00:06:31.533972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.404 qpair failed and we were unable to recover it. 00:30:16.404 [2024-07-16 00:06:31.543837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.404 [2024-07-16 00:06:31.543913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.404 [2024-07-16 00:06:31.543939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.404 [2024-07-16 00:06:31.543949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.404 [2024-07-16 00:06:31.543957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.404 [2024-07-16 00:06:31.543977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.404 qpair failed and we were unable to recover it. 00:30:16.404 [2024-07-16 00:06:31.553922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.404 [2024-07-16 00:06:31.554002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.404 [2024-07-16 00:06:31.554027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.404 [2024-07-16 00:06:31.554036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.404 [2024-07-16 00:06:31.554043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.404 [2024-07-16 00:06:31.554062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.404 qpair failed and we were unable to recover it. 
00:30:16.404 [2024-07-16 00:06:31.563973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.404 [2024-07-16 00:06:31.564049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.404 [2024-07-16 00:06:31.564080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.404 [2024-07-16 00:06:31.564089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.404 [2024-07-16 00:06:31.564096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.404 [2024-07-16 00:06:31.564114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.404 qpair failed and we were unable to recover it. 00:30:16.666 [2024-07-16 00:06:31.574003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.666 [2024-07-16 00:06:31.574076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.666 [2024-07-16 00:06:31.574093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.666 [2024-07-16 00:06:31.574100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.666 [2024-07-16 00:06:31.574108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.666 [2024-07-16 00:06:31.574123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.666 qpair failed and we were unable to recover it. 00:30:16.666 [2024-07-16 00:06:31.583983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.666 [2024-07-16 00:06:31.584051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.666 [2024-07-16 00:06:31.584066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.666 [2024-07-16 00:06:31.584074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.666 [2024-07-16 00:06:31.584080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.666 [2024-07-16 00:06:31.584094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.666 qpair failed and we were unable to recover it. 
00:30:16.666 [2024-07-16 00:06:31.594043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.666 [2024-07-16 00:06:31.594121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.666 [2024-07-16 00:06:31.594137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.666 [2024-07-16 00:06:31.594143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.666 [2024-07-16 00:06:31.594149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.666 [2024-07-16 00:06:31.594163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.666 qpair failed and we were unable to recover it. 00:30:16.666 [2024-07-16 00:06:31.603962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.666 [2024-07-16 00:06:31.604027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.666 [2024-07-16 00:06:31.604042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.666 [2024-07-16 00:06:31.604049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.666 [2024-07-16 00:06:31.604055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.666 [2024-07-16 00:06:31.604073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.666 qpair failed and we were unable to recover it. 00:30:16.666 [2024-07-16 00:06:31.613984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.666 [2024-07-16 00:06:31.614050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.666 [2024-07-16 00:06:31.614065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.666 [2024-07-16 00:06:31.614072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.666 [2024-07-16 00:06:31.614079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.666 [2024-07-16 00:06:31.614093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.666 qpair failed and we were unable to recover it. 
00:30:16.666 [2024-07-16 00:06:31.624103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.666 [2024-07-16 00:06:31.624171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.666 [2024-07-16 00:06:31.624186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.666 [2024-07-16 00:06:31.624193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.666 [2024-07-16 00:06:31.624199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.666 [2024-07-16 00:06:31.624213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.666 qpair failed and we were unable to recover it. 00:30:16.666 [2024-07-16 00:06:31.634237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.666 [2024-07-16 00:06:31.634314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.666 [2024-07-16 00:06:31.634329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.666 [2024-07-16 00:06:31.634336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.666 [2024-07-16 00:06:31.634342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.666 [2024-07-16 00:06:31.634357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.666 qpair failed and we were unable to recover it. 00:30:16.666 [2024-07-16 00:06:31.644174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.666 [2024-07-16 00:06:31.644245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.666 [2024-07-16 00:06:31.644260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.666 [2024-07-16 00:06:31.644267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.666 [2024-07-16 00:06:31.644274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.666 [2024-07-16 00:06:31.644288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.666 qpair failed and we were unable to recover it. 
00:30:16.667 [2024-07-16 00:06:31.654210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.667 [2024-07-16 00:06:31.654284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.667 [2024-07-16 00:06:31.654303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.667 [2024-07-16 00:06:31.654310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.667 [2024-07-16 00:06:31.654317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.667 [2024-07-16 00:06:31.654331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.667 qpair failed and we were unable to recover it. 00:30:16.667 [2024-07-16 00:06:31.664221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.667 [2024-07-16 00:06:31.664295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.667 [2024-07-16 00:06:31.664311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.667 [2024-07-16 00:06:31.664318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.667 [2024-07-16 00:06:31.664324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.667 [2024-07-16 00:06:31.664338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.667 qpair failed and we were unable to recover it. 00:30:16.667 [2024-07-16 00:06:31.674150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.667 [2024-07-16 00:06:31.674225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.667 [2024-07-16 00:06:31.674244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.667 [2024-07-16 00:06:31.674251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.667 [2024-07-16 00:06:31.674257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.667 [2024-07-16 00:06:31.674271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.667 qpair failed and we were unable to recover it. 
00:30:16.667 [2024-07-16 00:06:31.684296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.667 [2024-07-16 00:06:31.684359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.667 [2024-07-16 00:06:31.684375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.667 [2024-07-16 00:06:31.684382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.667 [2024-07-16 00:06:31.684388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.667 [2024-07-16 00:06:31.684402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.667 qpair failed and we were unable to recover it. 00:30:16.667 [2024-07-16 00:06:31.694318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.667 [2024-07-16 00:06:31.694415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.667 [2024-07-16 00:06:31.694430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.667 [2024-07-16 00:06:31.694437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.667 [2024-07-16 00:06:31.694447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.667 [2024-07-16 00:06:31.694461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.667 qpair failed and we were unable to recover it. 00:30:16.667 [2024-07-16 00:06:31.704331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.667 [2024-07-16 00:06:31.704399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.667 [2024-07-16 00:06:31.704414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.667 [2024-07-16 00:06:31.704421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.667 [2024-07-16 00:06:31.704427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.667 [2024-07-16 00:06:31.704441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.667 qpair failed and we were unable to recover it. 
00:30:16.667 [2024-07-16 00:06:31.714496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.667 [2024-07-16 00:06:31.714587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.667 [2024-07-16 00:06:31.714602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.667 [2024-07-16 00:06:31.714609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.667 [2024-07-16 00:06:31.714615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.667 [2024-07-16 00:06:31.714629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.667 qpair failed and we were unable to recover it. 00:30:16.667 [2024-07-16 00:06:31.724401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.667 [2024-07-16 00:06:31.724494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.667 [2024-07-16 00:06:31.724510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.667 [2024-07-16 00:06:31.724517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.667 [2024-07-16 00:06:31.724523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.667 [2024-07-16 00:06:31.724536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.667 qpair failed and we were unable to recover it. 00:30:16.667 [2024-07-16 00:06:31.734487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.667 [2024-07-16 00:06:31.734554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.667 [2024-07-16 00:06:31.734570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.667 [2024-07-16 00:06:31.734577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.667 [2024-07-16 00:06:31.734584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.667 [2024-07-16 00:06:31.734597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.667 qpair failed and we were unable to recover it. 
00:30:16.667 [2024-07-16 00:06:31.744482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.667 [2024-07-16 00:06:31.744558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.667 [2024-07-16 00:06:31.744574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.667 [2024-07-16 00:06:31.744581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.667 [2024-07-16 00:06:31.744587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.667 [2024-07-16 00:06:31.744601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.667 qpair failed and we were unable to recover it. 00:30:16.667 [2024-07-16 00:06:31.754471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.667 [2024-07-16 00:06:31.754542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.667 [2024-07-16 00:06:31.754558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.667 [2024-07-16 00:06:31.754565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.667 [2024-07-16 00:06:31.754571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.667 [2024-07-16 00:06:31.754585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.667 qpair failed and we were unable to recover it. 00:30:16.667 [2024-07-16 00:06:31.764519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.667 [2024-07-16 00:06:31.764595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.667 [2024-07-16 00:06:31.764610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.668 [2024-07-16 00:06:31.764617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.668 [2024-07-16 00:06:31.764624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.668 [2024-07-16 00:06:31.764638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.668 qpair failed and we were unable to recover it. 
00:30:16.668 [2024-07-16 00:06:31.774526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.668 [2024-07-16 00:06:31.774593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.668 [2024-07-16 00:06:31.774608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.668 [2024-07-16 00:06:31.774615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.668 [2024-07-16 00:06:31.774621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.668 [2024-07-16 00:06:31.774635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.668 qpair failed and we were unable to recover it. 00:30:16.668 [2024-07-16 00:06:31.784561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.668 [2024-07-16 00:06:31.784655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.668 [2024-07-16 00:06:31.784671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.668 [2024-07-16 00:06:31.784678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.668 [2024-07-16 00:06:31.784688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.668 [2024-07-16 00:06:31.784702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.668 qpair failed and we were unable to recover it. 00:30:16.668 [2024-07-16 00:06:31.794613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.668 [2024-07-16 00:06:31.794685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.668 [2024-07-16 00:06:31.794701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.668 [2024-07-16 00:06:31.794708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.668 [2024-07-16 00:06:31.794714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.668 [2024-07-16 00:06:31.794727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.668 qpair failed and we were unable to recover it. 
00:30:16.668 [2024-07-16 00:06:31.804646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.668 [2024-07-16 00:06:31.804759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.668 [2024-07-16 00:06:31.804774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.668 [2024-07-16 00:06:31.804782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.668 [2024-07-16 00:06:31.804788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.668 [2024-07-16 00:06:31.804801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.668 qpair failed and we were unable to recover it. 00:30:16.668 [2024-07-16 00:06:31.814643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.668 [2024-07-16 00:06:31.814713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.668 [2024-07-16 00:06:31.814729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.668 [2024-07-16 00:06:31.814736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.668 [2024-07-16 00:06:31.814742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.668 [2024-07-16 00:06:31.814756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.668 qpair failed and we were unable to recover it. 00:30:16.668 [2024-07-16 00:06:31.824671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.668 [2024-07-16 00:06:31.824737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.668 [2024-07-16 00:06:31.824753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.668 [2024-07-16 00:06:31.824760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.668 [2024-07-16 00:06:31.824766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.668 [2024-07-16 00:06:31.824780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.668 qpair failed and we were unable to recover it. 
00:30:16.668 [2024-07-16 00:06:31.834701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.668 [2024-07-16 00:06:31.834772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.668 [2024-07-16 00:06:31.834787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.668 [2024-07-16 00:06:31.834794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.668 [2024-07-16 00:06:31.834800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.668 [2024-07-16 00:06:31.834814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.668 qpair failed and we were unable to recover it. 00:30:16.668 [2024-07-16 00:06:31.844734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.668 [2024-07-16 00:06:31.844798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.668 [2024-07-16 00:06:31.844813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.668 [2024-07-16 00:06:31.844820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.668 [2024-07-16 00:06:31.844826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.668 [2024-07-16 00:06:31.844840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.668 qpair failed and we were unable to recover it. 00:30:16.930 [2024-07-16 00:06:31.854726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.930 [2024-07-16 00:06:31.854794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.930 [2024-07-16 00:06:31.854810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.930 [2024-07-16 00:06:31.854817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.930 [2024-07-16 00:06:31.854823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.930 [2024-07-16 00:06:31.854837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.930 qpair failed and we were unable to recover it. 
00:30:16.930 [2024-07-16 00:06:31.864843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.930 [2024-07-16 00:06:31.864910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.930 [2024-07-16 00:06:31.864925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.930 [2024-07-16 00:06:31.864933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.930 [2024-07-16 00:06:31.864939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.930 [2024-07-16 00:06:31.864953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.930 qpair failed and we were unable to recover it. 00:30:16.930 [2024-07-16 00:06:31.874696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.930 [2024-07-16 00:06:31.874771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.930 [2024-07-16 00:06:31.874786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.930 [2024-07-16 00:06:31.874794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.930 [2024-07-16 00:06:31.874804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.930 [2024-07-16 00:06:31.874818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.930 qpair failed and we were unable to recover it. 00:30:16.930 [2024-07-16 00:06:31.884832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.930 [2024-07-16 00:06:31.884952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.930 [2024-07-16 00:06:31.884968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.930 [2024-07-16 00:06:31.884975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.930 [2024-07-16 00:06:31.884981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.930 [2024-07-16 00:06:31.884996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.930 qpair failed and we were unable to recover it. 
00:30:16.930 [2024-07-16 00:06:31.894912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.930 [2024-07-16 00:06:31.894990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.931 [2024-07-16 00:06:31.895005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.931 [2024-07-16 00:06:31.895012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.931 [2024-07-16 00:06:31.895019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.931 [2024-07-16 00:06:31.895033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.931 qpair failed and we were unable to recover it. 00:30:16.931 [2024-07-16 00:06:31.904868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.931 [2024-07-16 00:06:31.904941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.931 [2024-07-16 00:06:31.904967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.931 [2024-07-16 00:06:31.904976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.931 [2024-07-16 00:06:31.904983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.931 [2024-07-16 00:06:31.905002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.931 qpair failed and we were unable to recover it. 00:30:16.931 [2024-07-16 00:06:31.914938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.931 [2024-07-16 00:06:31.915012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.931 [2024-07-16 00:06:31.915037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.931 [2024-07-16 00:06:31.915046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.931 [2024-07-16 00:06:31.915053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.931 [2024-07-16 00:06:31.915072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.931 qpair failed and we were unable to recover it. 
00:30:16.931 [2024-07-16 00:06:31.924973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.931 [2024-07-16 00:06:31.925049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.931 [2024-07-16 00:06:31.925075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.931 [2024-07-16 00:06:31.925084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.931 [2024-07-16 00:06:31.925091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.931 [2024-07-16 00:06:31.925110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.931 qpair failed and we were unable to recover it. 00:30:16.931 [2024-07-16 00:06:31.934983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.931 [2024-07-16 00:06:31.935049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.931 [2024-07-16 00:06:31.935066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.931 [2024-07-16 00:06:31.935073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.931 [2024-07-16 00:06:31.935080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.931 [2024-07-16 00:06:31.935096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.931 qpair failed and we were unable to recover it. 00:30:16.931 [2024-07-16 00:06:31.944989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.931 [2024-07-16 00:06:31.945055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.931 [2024-07-16 00:06:31.945071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.931 [2024-07-16 00:06:31.945078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.931 [2024-07-16 00:06:31.945084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.931 [2024-07-16 00:06:31.945098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.931 qpair failed and we were unable to recover it. 
00:30:16.931 [2024-07-16 00:06:31.955033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.931 [2024-07-16 00:06:31.955114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.931 [2024-07-16 00:06:31.955130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.931 [2024-07-16 00:06:31.955137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.931 [2024-07-16 00:06:31.955144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.931 [2024-07-16 00:06:31.955158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.931 qpair failed and we were unable to recover it. 00:30:16.931 [2024-07-16 00:06:31.965060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.931 [2024-07-16 00:06:31.965124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.931 [2024-07-16 00:06:31.965140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.931 [2024-07-16 00:06:31.965151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.931 [2024-07-16 00:06:31.965158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.931 [2024-07-16 00:06:31.965173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.931 qpair failed and we were unable to recover it. 00:30:16.931 [2024-07-16 00:06:31.974977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.931 [2024-07-16 00:06:31.975049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.931 [2024-07-16 00:06:31.975065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.931 [2024-07-16 00:06:31.975072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.931 [2024-07-16 00:06:31.975079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.931 [2024-07-16 00:06:31.975094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.931 qpair failed and we were unable to recover it. 
00:30:16.931 [2024-07-16 00:06:31.985114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.931 [2024-07-16 00:06:31.985193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.931 [2024-07-16 00:06:31.985209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.931 [2024-07-16 00:06:31.985216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.931 [2024-07-16 00:06:31.985222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.931 [2024-07-16 00:06:31.985241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.931 qpair failed and we were unable to recover it. 00:30:16.931 [2024-07-16 00:06:31.995144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.931 [2024-07-16 00:06:31.995223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.931 [2024-07-16 00:06:31.995241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.931 [2024-07-16 00:06:31.995249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.931 [2024-07-16 00:06:31.995255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.931 [2024-07-16 00:06:31.995269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.931 qpair failed and we were unable to recover it. 00:30:16.931 [2024-07-16 00:06:32.005063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.931 [2024-07-16 00:06:32.005133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.931 [2024-07-16 00:06:32.005149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.931 [2024-07-16 00:06:32.005156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.931 [2024-07-16 00:06:32.005163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.931 [2024-07-16 00:06:32.005177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.932 qpair failed and we were unable to recover it. 
00:30:16.932 [2024-07-16 00:06:32.015108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.932 [2024-07-16 00:06:32.015173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.932 [2024-07-16 00:06:32.015188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.932 [2024-07-16 00:06:32.015195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.932 [2024-07-16 00:06:32.015201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.932 [2024-07-16 00:06:32.015215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.932 qpair failed and we were unable to recover it. 00:30:16.932 [2024-07-16 00:06:32.025191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.932 [2024-07-16 00:06:32.025262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.932 [2024-07-16 00:06:32.025278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.932 [2024-07-16 00:06:32.025285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.932 [2024-07-16 00:06:32.025291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.932 [2024-07-16 00:06:32.025305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.932 qpair failed and we were unable to recover it. 00:30:16.932 [2024-07-16 00:06:32.035260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.932 [2024-07-16 00:06:32.035334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.932 [2024-07-16 00:06:32.035349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.932 [2024-07-16 00:06:32.035356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.932 [2024-07-16 00:06:32.035362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.932 [2024-07-16 00:06:32.035376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.932 qpair failed and we were unable to recover it. 
00:30:16.932 [2024-07-16 00:06:32.045279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.932 [2024-07-16 00:06:32.045344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.932 [2024-07-16 00:06:32.045359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.932 [2024-07-16 00:06:32.045366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.932 [2024-07-16 00:06:32.045372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.932 [2024-07-16 00:06:32.045386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.932 qpair failed and we were unable to recover it. 00:30:16.932 [2024-07-16 00:06:32.055299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.932 [2024-07-16 00:06:32.055373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.932 [2024-07-16 00:06:32.055388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.932 [2024-07-16 00:06:32.055399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.932 [2024-07-16 00:06:32.055405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.932 [2024-07-16 00:06:32.055419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.932 qpair failed and we were unable to recover it. 00:30:16.932 [2024-07-16 00:06:32.065335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.932 [2024-07-16 00:06:32.065404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.932 [2024-07-16 00:06:32.065419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.932 [2024-07-16 00:06:32.065426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.932 [2024-07-16 00:06:32.065432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.932 [2024-07-16 00:06:32.065447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.932 qpair failed and we were unable to recover it. 
00:30:16.932 [2024-07-16 00:06:32.075356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.932 [2024-07-16 00:06:32.075426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.932 [2024-07-16 00:06:32.075441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.932 [2024-07-16 00:06:32.075448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.932 [2024-07-16 00:06:32.075454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.932 [2024-07-16 00:06:32.075468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.932 qpair failed and we were unable to recover it. 00:30:16.932 [2024-07-16 00:06:32.085311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.932 [2024-07-16 00:06:32.085377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.932 [2024-07-16 00:06:32.085394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.932 [2024-07-16 00:06:32.085401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.932 [2024-07-16 00:06:32.085407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.932 [2024-07-16 00:06:32.085421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.932 qpair failed and we were unable to recover it. 00:30:16.932 [2024-07-16 00:06:32.095438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.932 [2024-07-16 00:06:32.095505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.932 [2024-07-16 00:06:32.095520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.932 [2024-07-16 00:06:32.095527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.932 [2024-07-16 00:06:32.095533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.932 [2024-07-16 00:06:32.095547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.932 qpair failed and we were unable to recover it. 
00:30:16.932 [2024-07-16 00:06:32.105475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.932 [2024-07-16 00:06:32.105541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.932 [2024-07-16 00:06:32.105556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.932 [2024-07-16 00:06:32.105563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.932 [2024-07-16 00:06:32.105569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.932 [2024-07-16 00:06:32.105583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.932 qpair failed and we were unable to recover it. 00:30:16.932 [2024-07-16 00:06:32.115535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.932 [2024-07-16 00:06:32.115604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.932 [2024-07-16 00:06:32.115618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.932 [2024-07-16 00:06:32.115625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.932 [2024-07-16 00:06:32.115631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:16.932 [2024-07-16 00:06:32.115645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.932 qpair failed and we were unable to recover it. 00:30:17.194 [2024-07-16 00:06:32.125509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.194 [2024-07-16 00:06:32.125573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.194 [2024-07-16 00:06:32.125588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.194 [2024-07-16 00:06:32.125595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.194 [2024-07-16 00:06:32.125601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.194 [2024-07-16 00:06:32.125614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.194 qpair failed and we were unable to recover it. 
00:30:17.194 [2024-07-16 00:06:32.135559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.194 [2024-07-16 00:06:32.135626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.194 [2024-07-16 00:06:32.135641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.194 [2024-07-16 00:06:32.135649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.194 [2024-07-16 00:06:32.135655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.194 [2024-07-16 00:06:32.135669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.194 qpair failed and we were unable to recover it. 00:30:17.194 [2024-07-16 00:06:32.145596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.194 [2024-07-16 00:06:32.145663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.194 [2024-07-16 00:06:32.145683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.194 [2024-07-16 00:06:32.145690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.194 [2024-07-16 00:06:32.145697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.194 [2024-07-16 00:06:32.145711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.194 qpair failed and we were unable to recover it. 00:30:17.194 [2024-07-16 00:06:32.155612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.194 [2024-07-16 00:06:32.155683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.194 [2024-07-16 00:06:32.155699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.194 [2024-07-16 00:06:32.155706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.194 [2024-07-16 00:06:32.155712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.194 [2024-07-16 00:06:32.155727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.194 qpair failed and we were unable to recover it. 
00:30:17.194 [2024-07-16 00:06:32.165621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.195 [2024-07-16 00:06:32.165686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.195 [2024-07-16 00:06:32.165701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.195 [2024-07-16 00:06:32.165708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.195 [2024-07-16 00:06:32.165715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.195 [2024-07-16 00:06:32.165728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.195 qpair failed and we were unable to recover it. 00:30:17.195 [2024-07-16 00:06:32.175632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.195 [2024-07-16 00:06:32.175696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.195 [2024-07-16 00:06:32.175711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.195 [2024-07-16 00:06:32.175718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.195 [2024-07-16 00:06:32.175724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.195 [2024-07-16 00:06:32.175738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.195 qpair failed and we were unable to recover it. 00:30:17.195 [2024-07-16 00:06:32.185672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.195 [2024-07-16 00:06:32.185746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.195 [2024-07-16 00:06:32.185762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.195 [2024-07-16 00:06:32.185768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.195 [2024-07-16 00:06:32.185775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.195 [2024-07-16 00:06:32.185788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.195 qpair failed and we were unable to recover it. 
00:30:17.195 [2024-07-16 00:06:32.195720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.195 [2024-07-16 00:06:32.195786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.195 [2024-07-16 00:06:32.195801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.195 [2024-07-16 00:06:32.195808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.195 [2024-07-16 00:06:32.195814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.195 [2024-07-16 00:06:32.195828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.195 qpair failed and we were unable to recover it. 00:30:17.195 [2024-07-16 00:06:32.205743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.195 [2024-07-16 00:06:32.205809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.195 [2024-07-16 00:06:32.205823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.195 [2024-07-16 00:06:32.205830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.195 [2024-07-16 00:06:32.205836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.195 [2024-07-16 00:06:32.205850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.195 qpair failed and we were unable to recover it. 00:30:17.195 [2024-07-16 00:06:32.215771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.195 [2024-07-16 00:06:32.215833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.195 [2024-07-16 00:06:32.215848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.195 [2024-07-16 00:06:32.215855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.195 [2024-07-16 00:06:32.215861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.195 [2024-07-16 00:06:32.215874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.195 qpair failed and we were unable to recover it. 
00:30:17.195 [2024-07-16 00:06:32.225699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.195 [2024-07-16 00:06:32.225767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.195 [2024-07-16 00:06:32.225783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.195 [2024-07-16 00:06:32.225790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.195 [2024-07-16 00:06:32.225796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.195 [2024-07-16 00:06:32.225809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.195 qpair failed and we were unable to recover it. 00:30:17.195 [2024-07-16 00:06:32.235787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.195 [2024-07-16 00:06:32.235863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.195 [2024-07-16 00:06:32.235882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.195 [2024-07-16 00:06:32.235889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.195 [2024-07-16 00:06:32.235895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.195 [2024-07-16 00:06:32.235908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.195 qpair failed and we were unable to recover it. 00:30:17.195 [2024-07-16 00:06:32.245870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.195 [2024-07-16 00:06:32.245945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.195 [2024-07-16 00:06:32.245971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.195 [2024-07-16 00:06:32.245979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.195 [2024-07-16 00:06:32.245986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.195 [2024-07-16 00:06:32.246005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.195 qpair failed and we were unable to recover it. 
00:30:17.195 [2024-07-16 00:06:32.255884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.195 [2024-07-16 00:06:32.255958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.195 [2024-07-16 00:06:32.255983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.195 [2024-07-16 00:06:32.255992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.195 [2024-07-16 00:06:32.255999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.195 [2024-07-16 00:06:32.256018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.195 qpair failed and we were unable to recover it. 00:30:17.195 [2024-07-16 00:06:32.265849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.195 [2024-07-16 00:06:32.265922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.195 [2024-07-16 00:06:32.265947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.195 [2024-07-16 00:06:32.265956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.195 [2024-07-16 00:06:32.265963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.195 [2024-07-16 00:06:32.265981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.195 qpair failed and we were unable to recover it. 00:30:17.195 [2024-07-16 00:06:32.275823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.195 [2024-07-16 00:06:32.275901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.196 [2024-07-16 00:06:32.275918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.196 [2024-07-16 00:06:32.275925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.196 [2024-07-16 00:06:32.275931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.196 [2024-07-16 00:06:32.275951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.196 qpair failed and we were unable to recover it. 
00:30:17.196 [2024-07-16 00:06:32.285981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.196 [2024-07-16 00:06:32.286056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.196 [2024-07-16 00:06:32.286081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.196 [2024-07-16 00:06:32.286090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.196 [2024-07-16 00:06:32.286097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.196 [2024-07-16 00:06:32.286116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.196 qpair failed and we were unable to recover it. 00:30:17.196 [2024-07-16 00:06:32.295999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.196 [2024-07-16 00:06:32.296063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.196 [2024-07-16 00:06:32.296079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.196 [2024-07-16 00:06:32.296086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.196 [2024-07-16 00:06:32.296092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.196 [2024-07-16 00:06:32.296107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.196 qpair failed and we were unable to recover it. 00:30:17.196 [2024-07-16 00:06:32.306028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.196 [2024-07-16 00:06:32.306094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.196 [2024-07-16 00:06:32.306111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.196 [2024-07-16 00:06:32.306118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.196 [2024-07-16 00:06:32.306124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.196 [2024-07-16 00:06:32.306138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.196 qpair failed and we were unable to recover it. 
00:30:17.196 [2024-07-16 00:06:32.316048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.196 [2024-07-16 00:06:32.316116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.196 [2024-07-16 00:06:32.316131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.196 [2024-07-16 00:06:32.316138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.196 [2024-07-16 00:06:32.316144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.196 [2024-07-16 00:06:32.316158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.196 qpair failed and we were unable to recover it. 00:30:17.196 [2024-07-16 00:06:32.326087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.196 [2024-07-16 00:06:32.326146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.196 [2024-07-16 00:06:32.326165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.196 [2024-07-16 00:06:32.326173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.196 [2024-07-16 00:06:32.326179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.196 [2024-07-16 00:06:32.326193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.196 qpair failed and we were unable to recover it. 00:30:17.196 [2024-07-16 00:06:32.336104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.196 [2024-07-16 00:06:32.336171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.196 [2024-07-16 00:06:32.336186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.196 [2024-07-16 00:06:32.336193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.196 [2024-07-16 00:06:32.336199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.196 [2024-07-16 00:06:32.336213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.196 qpair failed and we were unable to recover it. 
00:30:17.196 [2024-07-16 00:06:32.346132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.196 [2024-07-16 00:06:32.346196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.196 [2024-07-16 00:06:32.346211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.196 [2024-07-16 00:06:32.346218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.196 [2024-07-16 00:06:32.346224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.196 [2024-07-16 00:06:32.346242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.196 qpair failed and we were unable to recover it. 00:30:17.196 [2024-07-16 00:06:32.356148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.196 [2024-07-16 00:06:32.356217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.196 [2024-07-16 00:06:32.356236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.196 [2024-07-16 00:06:32.356244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.196 [2024-07-16 00:06:32.356250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.196 [2024-07-16 00:06:32.356264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.196 qpair failed and we were unable to recover it. 00:30:17.196 [2024-07-16 00:06:32.366141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.196 [2024-07-16 00:06:32.366205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.196 [2024-07-16 00:06:32.366221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.196 [2024-07-16 00:06:32.366228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.196 [2024-07-16 00:06:32.366240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.196 [2024-07-16 00:06:32.366258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.196 qpair failed and we were unable to recover it. 
00:30:17.196 [2024-07-16 00:06:32.376216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.196 [2024-07-16 00:06:32.376286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.196 [2024-07-16 00:06:32.376302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.196 [2024-07-16 00:06:32.376309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.196 [2024-07-16 00:06:32.376316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.196 [2024-07-16 00:06:32.376330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.196 qpair failed and we were unable to recover it. 00:30:17.459 [2024-07-16 00:06:32.386254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.459 [2024-07-16 00:06:32.386323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.459 [2024-07-16 00:06:32.386338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.459 [2024-07-16 00:06:32.386345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.459 [2024-07-16 00:06:32.386352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.459 [2024-07-16 00:06:32.386365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.459 qpair failed and we were unable to recover it. 00:30:17.459 [2024-07-16 00:06:32.396281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.459 [2024-07-16 00:06:32.396358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.459 [2024-07-16 00:06:32.396372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.459 [2024-07-16 00:06:32.396379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.459 [2024-07-16 00:06:32.396386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.459 [2024-07-16 00:06:32.396400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.459 qpair failed and we were unable to recover it. 
00:30:17.459 [2024-07-16 00:06:32.406300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.459 [2024-07-16 00:06:32.406366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.459 [2024-07-16 00:06:32.406381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.459 [2024-07-16 00:06:32.406388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.459 [2024-07-16 00:06:32.406394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.459 [2024-07-16 00:06:32.406408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.459 qpair failed and we were unable to recover it. 00:30:17.459 [2024-07-16 00:06:32.416335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.459 [2024-07-16 00:06:32.416402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.459 [2024-07-16 00:06:32.416420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.459 [2024-07-16 00:06:32.416428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.459 [2024-07-16 00:06:32.416434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.459 [2024-07-16 00:06:32.416447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.459 qpair failed and we were unable to recover it. 00:30:17.459 [2024-07-16 00:06:32.426335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.459 [2024-07-16 00:06:32.426402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.459 [2024-07-16 00:06:32.426418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.459 [2024-07-16 00:06:32.426425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.459 [2024-07-16 00:06:32.426431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.459 [2024-07-16 00:06:32.426445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.459 qpair failed and we were unable to recover it. 
00:30:17.459 [2024-07-16 00:06:32.436380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.459 [2024-07-16 00:06:32.436447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.459 [2024-07-16 00:06:32.436462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.459 [2024-07-16 00:06:32.436470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.459 [2024-07-16 00:06:32.436476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.459 [2024-07-16 00:06:32.436489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.459 qpair failed and we were unable to recover it. 00:30:17.459 [2024-07-16 00:06:32.446404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.459 [2024-07-16 00:06:32.446465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.459 [2024-07-16 00:06:32.446480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.459 [2024-07-16 00:06:32.446487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.459 [2024-07-16 00:06:32.446492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.459 [2024-07-16 00:06:32.446506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.459 qpair failed and we were unable to recover it. 00:30:17.459 [2024-07-16 00:06:32.456451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.459 [2024-07-16 00:06:32.456524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.460 [2024-07-16 00:06:32.456539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.460 [2024-07-16 00:06:32.456546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.460 [2024-07-16 00:06:32.456556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.460 [2024-07-16 00:06:32.456569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.460 qpair failed and we were unable to recover it. 
00:30:17.460 [2024-07-16 00:06:32.466548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.460 [2024-07-16 00:06:32.466641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.460 [2024-07-16 00:06:32.466656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.460 [2024-07-16 00:06:32.466663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.460 [2024-07-16 00:06:32.466670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.460 [2024-07-16 00:06:32.466684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.460 qpair failed and we were unable to recover it. 00:30:17.460 [2024-07-16 00:06:32.476382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.460 [2024-07-16 00:06:32.476460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.460 [2024-07-16 00:06:32.476475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.460 [2024-07-16 00:06:32.476482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.460 [2024-07-16 00:06:32.476488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.460 [2024-07-16 00:06:32.476502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.460 qpair failed and we were unable to recover it. 00:30:17.460 [2024-07-16 00:06:32.486427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.460 [2024-07-16 00:06:32.486493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.460 [2024-07-16 00:06:32.486508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.460 [2024-07-16 00:06:32.486515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.460 [2024-07-16 00:06:32.486521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.460 [2024-07-16 00:06:32.486535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.460 qpair failed and we were unable to recover it. 
00:30:17.460 [2024-07-16 00:06:32.496425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.460 [2024-07-16 00:06:32.496489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.460 [2024-07-16 00:06:32.496504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.460 [2024-07-16 00:06:32.496511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.460 [2024-07-16 00:06:32.496517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.460 [2024-07-16 00:06:32.496531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.460 qpair failed and we were unable to recover it. 00:30:17.460 [2024-07-16 00:06:32.506592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.460 [2024-07-16 00:06:32.506666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.460 [2024-07-16 00:06:32.506682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.460 [2024-07-16 00:06:32.506689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.460 [2024-07-16 00:06:32.506695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.460 [2024-07-16 00:06:32.506708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.460 qpair failed and we were unable to recover it. 00:30:17.460 [2024-07-16 00:06:32.516596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.460 [2024-07-16 00:06:32.516676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.460 [2024-07-16 00:06:32.516691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.460 [2024-07-16 00:06:32.516698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.460 [2024-07-16 00:06:32.516705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.460 [2024-07-16 00:06:32.516719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.460 qpair failed and we were unable to recover it. 
00:30:17.460 [2024-07-16 00:06:32.526629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.460 [2024-07-16 00:06:32.526693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.460 [2024-07-16 00:06:32.526708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.460 [2024-07-16 00:06:32.526715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.460 [2024-07-16 00:06:32.526721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.460 [2024-07-16 00:06:32.526734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.460 qpair failed and we were unable to recover it. 00:30:17.460 [2024-07-16 00:06:32.536662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.460 [2024-07-16 00:06:32.536826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.460 [2024-07-16 00:06:32.536842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.460 [2024-07-16 00:06:32.536849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.460 [2024-07-16 00:06:32.536855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.460 [2024-07-16 00:06:32.536868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.460 qpair failed and we were unable to recover it. 00:30:17.460 [2024-07-16 00:06:32.546677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.460 [2024-07-16 00:06:32.546743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.460 [2024-07-16 00:06:32.546757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.460 [2024-07-16 00:06:32.546765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.460 [2024-07-16 00:06:32.546774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.460 [2024-07-16 00:06:32.546788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.460 qpair failed and we were unable to recover it. 
00:30:17.460 [2024-07-16 00:06:32.556705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.460 [2024-07-16 00:06:32.556773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.460 [2024-07-16 00:06:32.556788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.460 [2024-07-16 00:06:32.556795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.460 [2024-07-16 00:06:32.556801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.460 [2024-07-16 00:06:32.556815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.460 qpair failed and we were unable to recover it. 00:30:17.460 [2024-07-16 00:06:32.566729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.460 [2024-07-16 00:06:32.566798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.460 [2024-07-16 00:06:32.566813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.460 [2024-07-16 00:06:32.566820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.460 [2024-07-16 00:06:32.566826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.461 [2024-07-16 00:06:32.566841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.461 qpair failed and we were unable to recover it. 00:30:17.461 [2024-07-16 00:06:32.576749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.461 [2024-07-16 00:06:32.576818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.461 [2024-07-16 00:06:32.576833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.461 [2024-07-16 00:06:32.576840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.461 [2024-07-16 00:06:32.576846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.461 [2024-07-16 00:06:32.576861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.461 qpair failed and we were unable to recover it. 
00:30:17.461 [2024-07-16 00:06:32.586810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.461 [2024-07-16 00:06:32.586877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.461 [2024-07-16 00:06:32.586892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.461 [2024-07-16 00:06:32.586899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.461 [2024-07-16 00:06:32.586905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.461 [2024-07-16 00:06:32.586919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.461 qpair failed and we were unable to recover it. 00:30:17.461 [2024-07-16 00:06:32.596829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.461 [2024-07-16 00:06:32.596938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.461 [2024-07-16 00:06:32.596953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.461 [2024-07-16 00:06:32.596960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.461 [2024-07-16 00:06:32.596966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.461 [2024-07-16 00:06:32.596980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.461 qpair failed and we were unable to recover it. 00:30:17.461 [2024-07-16 00:06:32.606870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.461 [2024-07-16 00:06:32.606940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.461 [2024-07-16 00:06:32.606965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.461 [2024-07-16 00:06:32.606974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.461 [2024-07-16 00:06:32.606980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.461 [2024-07-16 00:06:32.606998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.461 qpair failed and we were unable to recover it. 
00:30:17.461 [2024-07-16 00:06:32.616867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.461 [2024-07-16 00:06:32.616935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.461 [2024-07-16 00:06:32.616960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.461 [2024-07-16 00:06:32.616969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.461 [2024-07-16 00:06:32.616976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.461 [2024-07-16 00:06:32.616995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.461 qpair failed and we were unable to recover it. 00:30:17.461 [2024-07-16 00:06:32.626893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.461 [2024-07-16 00:06:32.626965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.461 [2024-07-16 00:06:32.626990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.461 [2024-07-16 00:06:32.626999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.461 [2024-07-16 00:06:32.627005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.461 [2024-07-16 00:06:32.627024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.461 qpair failed and we were unable to recover it. 00:30:17.461 [2024-07-16 00:06:32.636919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.461 [2024-07-16 00:06:32.637002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.461 [2024-07-16 00:06:32.637027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.461 [2024-07-16 00:06:32.637036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.461 [2024-07-16 00:06:32.637047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.461 [2024-07-16 00:06:32.637065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.461 qpair failed and we were unable to recover it. 
00:30:17.461 [2024-07-16 00:06:32.646947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.461 [2024-07-16 00:06:32.647024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.461 [2024-07-16 00:06:32.647049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.461 [2024-07-16 00:06:32.647058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.461 [2024-07-16 00:06:32.647064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.461 [2024-07-16 00:06:32.647083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.461 qpair failed and we were unable to recover it. 00:30:17.724 [2024-07-16 00:06:32.656978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.724 [2024-07-16 00:06:32.657041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.724 [2024-07-16 00:06:32.657059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.724 [2024-07-16 00:06:32.657066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.724 [2024-07-16 00:06:32.657072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.724 [2024-07-16 00:06:32.657088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.724 qpair failed and we were unable to recover it. 00:30:17.724 [2024-07-16 00:06:32.667057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.724 [2024-07-16 00:06:32.667127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.724 [2024-07-16 00:06:32.667143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.724 [2024-07-16 00:06:32.667150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.724 [2024-07-16 00:06:32.667156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.724 [2024-07-16 00:06:32.667172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.724 qpair failed and we were unable to recover it. 
00:30:17.724 [2024-07-16 00:06:32.676968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.724 [2024-07-16 00:06:32.677037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.724 [2024-07-16 00:06:32.677052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.724 [2024-07-16 00:06:32.677059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.724 [2024-07-16 00:06:32.677065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.724 [2024-07-16 00:06:32.677079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.724 qpair failed and we were unable to recover it. 00:30:17.724 [2024-07-16 00:06:32.687077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.724 [2024-07-16 00:06:32.687138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.724 [2024-07-16 00:06:32.687154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.724 [2024-07-16 00:06:32.687161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.724 [2024-07-16 00:06:32.687167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.724 [2024-07-16 00:06:32.687181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.724 qpair failed and we were unable to recover it. 00:30:17.724 [2024-07-16 00:06:32.696983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.724 [2024-07-16 00:06:32.697048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.724 [2024-07-16 00:06:32.697063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.724 [2024-07-16 00:06:32.697070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.724 [2024-07-16 00:06:32.697076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.724 [2024-07-16 00:06:32.697090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.724 qpair failed and we were unable to recover it. 
00:30:17.724 [2024-07-16 00:06:32.707124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.724 [2024-07-16 00:06:32.707191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.725 [2024-07-16 00:06:32.707206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.725 [2024-07-16 00:06:32.707213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.725 [2024-07-16 00:06:32.707219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.725 [2024-07-16 00:06:32.707238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.725 qpair failed and we were unable to recover it. 00:30:17.725 [2024-07-16 00:06:32.717157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.725 [2024-07-16 00:06:32.717228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.725 [2024-07-16 00:06:32.717247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.725 [2024-07-16 00:06:32.717254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.725 [2024-07-16 00:06:32.717260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.725 [2024-07-16 00:06:32.717274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.725 qpair failed and we were unable to recover it. 00:30:17.725 [2024-07-16 00:06:32.727171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.725 [2024-07-16 00:06:32.727240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.725 [2024-07-16 00:06:32.727256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.725 [2024-07-16 00:06:32.727267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.725 [2024-07-16 00:06:32.727273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.725 [2024-07-16 00:06:32.727287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.725 qpair failed and we were unable to recover it. 
00:30:17.725 [2024-07-16 00:06:32.737202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.725 [2024-07-16 00:06:32.737276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.725 [2024-07-16 00:06:32.737292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.725 [2024-07-16 00:06:32.737299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.725 [2024-07-16 00:06:32.737307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.725 [2024-07-16 00:06:32.737321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.725 qpair failed and we were unable to recover it. 00:30:17.725 [2024-07-16 00:06:32.747245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.725 [2024-07-16 00:06:32.747311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.725 [2024-07-16 00:06:32.747326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.725 [2024-07-16 00:06:32.747333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.725 [2024-07-16 00:06:32.747339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.725 [2024-07-16 00:06:32.747353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.725 qpair failed and we were unable to recover it. 00:30:17.725 [2024-07-16 00:06:32.757155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.725 [2024-07-16 00:06:32.757227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.725 [2024-07-16 00:06:32.757247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.725 [2024-07-16 00:06:32.757255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.725 [2024-07-16 00:06:32.757261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.725 [2024-07-16 00:06:32.757276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.725 qpair failed and we were unable to recover it. 
00:30:17.725 [2024-07-16 00:06:32.767323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.725 [2024-07-16 00:06:32.767391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.725 [2024-07-16 00:06:32.767408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.725 [2024-07-16 00:06:32.767415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.725 [2024-07-16 00:06:32.767421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.725 [2024-07-16 00:06:32.767436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.725 qpair failed and we were unable to recover it. 00:30:17.725 [2024-07-16 00:06:32.777381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.725 [2024-07-16 00:06:32.777450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.725 [2024-07-16 00:06:32.777466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.725 [2024-07-16 00:06:32.777473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.725 [2024-07-16 00:06:32.777479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.725 [2024-07-16 00:06:32.777494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.725 qpair failed and we were unable to recover it. 00:30:17.725 [2024-07-16 00:06:32.787382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.725 [2024-07-16 00:06:32.787458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.725 [2024-07-16 00:06:32.787473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.725 [2024-07-16 00:06:32.787480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.725 [2024-07-16 00:06:32.787487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.725 [2024-07-16 00:06:32.787501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.725 qpair failed and we were unable to recover it. 
00:30:17.725 [2024-07-16 00:06:32.797364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.725 [2024-07-16 00:06:32.797435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.725 [2024-07-16 00:06:32.797450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.725 [2024-07-16 00:06:32.797457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.725 [2024-07-16 00:06:32.797463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.725 [2024-07-16 00:06:32.797477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.725 qpair failed and we were unable to recover it. 00:30:17.725 [2024-07-16 00:06:32.807404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.725 [2024-07-16 00:06:32.807468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.725 [2024-07-16 00:06:32.807484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.725 [2024-07-16 00:06:32.807491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.725 [2024-07-16 00:06:32.807497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.725 [2024-07-16 00:06:32.807510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.725 qpair failed and we were unable to recover it. 00:30:17.725 [2024-07-16 00:06:32.817471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.725 [2024-07-16 00:06:32.817540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.725 [2024-07-16 00:06:32.817556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.725 [2024-07-16 00:06:32.817566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.725 [2024-07-16 00:06:32.817572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.725 [2024-07-16 00:06:32.817586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.726 qpair failed and we were unable to recover it. 
00:30:17.726 [2024-07-16 00:06:32.827485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.726 [2024-07-16 00:06:32.827554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.726 [2024-07-16 00:06:32.827569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.726 [2024-07-16 00:06:32.827576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.726 [2024-07-16 00:06:32.827582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.726 [2024-07-16 00:06:32.827598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.726 qpair failed and we were unable to recover it. 00:30:17.726 [2024-07-16 00:06:32.837508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.726 [2024-07-16 00:06:32.837577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.726 [2024-07-16 00:06:32.837593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.726 [2024-07-16 00:06:32.837600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.726 [2024-07-16 00:06:32.837606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.726 [2024-07-16 00:06:32.837619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.726 qpair failed and we were unable to recover it. 00:30:17.726 [2024-07-16 00:06:32.847420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.726 [2024-07-16 00:06:32.847496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.726 [2024-07-16 00:06:32.847512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.726 [2024-07-16 00:06:32.847519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.726 [2024-07-16 00:06:32.847526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.726 [2024-07-16 00:06:32.847542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.726 qpair failed and we were unable to recover it. 
00:30:17.726 [2024-07-16 00:06:32.857560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.726 [2024-07-16 00:06:32.857624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.726 [2024-07-16 00:06:32.857640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.726 [2024-07-16 00:06:32.857647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.726 [2024-07-16 00:06:32.857653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.726 [2024-07-16 00:06:32.857666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.726 qpair failed and we were unable to recover it. 00:30:17.726 [2024-07-16 00:06:32.867584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.726 [2024-07-16 00:06:32.867652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.726 [2024-07-16 00:06:32.867668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.726 [2024-07-16 00:06:32.867675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.726 [2024-07-16 00:06:32.867681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.726 [2024-07-16 00:06:32.867695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.726 qpair failed and we were unable to recover it. 00:30:17.726 [2024-07-16 00:06:32.877517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.726 [2024-07-16 00:06:32.877589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.726 [2024-07-16 00:06:32.877604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.726 [2024-07-16 00:06:32.877611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.726 [2024-07-16 00:06:32.877617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.726 [2024-07-16 00:06:32.877632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.726 qpair failed and we were unable to recover it. 
00:30:17.726 [2024-07-16 00:06:32.887658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.726 [2024-07-16 00:06:32.887728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.726 [2024-07-16 00:06:32.887743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.726 [2024-07-16 00:06:32.887750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.726 [2024-07-16 00:06:32.887756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.726 [2024-07-16 00:06:32.887770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.726 qpair failed and we were unable to recover it. 00:30:17.726 [2024-07-16 00:06:32.897679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.726 [2024-07-16 00:06:32.897750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.726 [2024-07-16 00:06:32.897766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.726 [2024-07-16 00:06:32.897773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.726 [2024-07-16 00:06:32.897779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.726 [2024-07-16 00:06:32.897793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.726 qpair failed and we were unable to recover it. 00:30:17.726 [2024-07-16 00:06:32.907604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.726 [2024-07-16 00:06:32.907678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.726 [2024-07-16 00:06:32.907693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.726 [2024-07-16 00:06:32.907704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.726 [2024-07-16 00:06:32.907710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.726 [2024-07-16 00:06:32.907724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.726 qpair failed and we were unable to recover it. 
00:30:17.988 [2024-07-16 00:06:32.917746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.988 [2024-07-16 00:06:32.917821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.988 [2024-07-16 00:06:32.917836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.988 [2024-07-16 00:06:32.917843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.988 [2024-07-16 00:06:32.917849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.988 [2024-07-16 00:06:32.917862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.988 qpair failed and we were unable to recover it. 00:30:17.988 [2024-07-16 00:06:32.927754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.988 [2024-07-16 00:06:32.927821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.988 [2024-07-16 00:06:32.927836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.988 [2024-07-16 00:06:32.927843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.988 [2024-07-16 00:06:32.927849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.988 [2024-07-16 00:06:32.927863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.988 qpair failed and we were unable to recover it. 00:30:17.988 [2024-07-16 00:06:32.937723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.988 [2024-07-16 00:06:32.937788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.988 [2024-07-16 00:06:32.937804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.988 [2024-07-16 00:06:32.937811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.988 [2024-07-16 00:06:32.937817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.988 [2024-07-16 00:06:32.937830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.988 qpair failed and we were unable to recover it. 
00:30:17.988 [2024-07-16 00:06:32.947802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.988 [2024-07-16 00:06:32.947868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.988 [2024-07-16 00:06:32.947883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.989 [2024-07-16 00:06:32.947891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.989 [2024-07-16 00:06:32.947897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.989 [2024-07-16 00:06:32.947910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.989 qpair failed and we were unable to recover it. 00:30:17.989 [2024-07-16 00:06:32.957851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.989 [2024-07-16 00:06:32.957932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.989 [2024-07-16 00:06:32.957958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.989 [2024-07-16 00:06:32.957967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.989 [2024-07-16 00:06:32.957974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.989 [2024-07-16 00:06:32.957993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.989 qpair failed and we were unable to recover it. 00:30:17.989 [2024-07-16 00:06:32.967812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.989 [2024-07-16 00:06:32.967904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.989 [2024-07-16 00:06:32.967923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.989 [2024-07-16 00:06:32.967931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.989 [2024-07-16 00:06:32.967937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.989 [2024-07-16 00:06:32.967953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.989 qpair failed and we were unable to recover it. 
00:30:17.989 [2024-07-16 00:06:32.977895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.989 [2024-07-16 00:06:32.977961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.989 [2024-07-16 00:06:32.977978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.989 [2024-07-16 00:06:32.977985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.989 [2024-07-16 00:06:32.977991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.989 [2024-07-16 00:06:32.978007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.989 qpair failed and we were unable to recover it. 00:30:17.989 [2024-07-16 00:06:32.987935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.989 [2024-07-16 00:06:32.987999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.989 [2024-07-16 00:06:32.988014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.989 [2024-07-16 00:06:32.988022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.989 [2024-07-16 00:06:32.988028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.989 [2024-07-16 00:06:32.988043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.989 qpair failed and we were unable to recover it. 00:30:17.989 [2024-07-16 00:06:32.997997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.989 [2024-07-16 00:06:32.998093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.989 [2024-07-16 00:06:32.998113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.989 [2024-07-16 00:06:32.998120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.989 [2024-07-16 00:06:32.998127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.989 [2024-07-16 00:06:32.998140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.989 qpair failed and we were unable to recover it. 
00:30:17.989 [2024-07-16 00:06:33.007988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.989 [2024-07-16 00:06:33.008053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.989 [2024-07-16 00:06:33.008068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.989 [2024-07-16 00:06:33.008075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.989 [2024-07-16 00:06:33.008081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.989 [2024-07-16 00:06:33.008096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.989 qpair failed and we were unable to recover it. 00:30:17.989 [2024-07-16 00:06:33.017998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.989 [2024-07-16 00:06:33.018063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.989 [2024-07-16 00:06:33.018078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.989 [2024-07-16 00:06:33.018086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.989 [2024-07-16 00:06:33.018092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.989 [2024-07-16 00:06:33.018106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.989 qpair failed and we were unable to recover it. 00:30:17.989 [2024-07-16 00:06:33.028050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.989 [2024-07-16 00:06:33.028118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.989 [2024-07-16 00:06:33.028133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.989 [2024-07-16 00:06:33.028140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.989 [2024-07-16 00:06:33.028146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.989 [2024-07-16 00:06:33.028160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.989 qpair failed and we were unable to recover it. 
00:30:17.989 [2024-07-16 00:06:33.038087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.989 [2024-07-16 00:06:33.038165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.989 [2024-07-16 00:06:33.038180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.989 [2024-07-16 00:06:33.038187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.989 [2024-07-16 00:06:33.038194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.989 [2024-07-16 00:06:33.038212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.989 qpair failed and we were unable to recover it. 00:30:17.989 [2024-07-16 00:06:33.047984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.989 [2024-07-16 00:06:33.048060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.989 [2024-07-16 00:06:33.048076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.989 [2024-07-16 00:06:33.048083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.989 [2024-07-16 00:06:33.048090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.989 [2024-07-16 00:06:33.048103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.989 qpair failed and we were unable to recover it. 00:30:17.989 [2024-07-16 00:06:33.058134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.989 [2024-07-16 00:06:33.058197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.989 [2024-07-16 00:06:33.058213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.989 [2024-07-16 00:06:33.058220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.989 [2024-07-16 00:06:33.058226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.989 [2024-07-16 00:06:33.058244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.989 qpair failed and we were unable to recover it. 
00:30:17.990 [2024-07-16 00:06:33.068158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.990 [2024-07-16 00:06:33.068227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.990 [2024-07-16 00:06:33.068247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.990 [2024-07-16 00:06:33.068255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.990 [2024-07-16 00:06:33.068262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.990 [2024-07-16 00:06:33.068276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.990 qpair failed and we were unable to recover it. 00:30:17.990 [2024-07-16 00:06:33.078192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.990 [2024-07-16 00:06:33.078278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.990 [2024-07-16 00:06:33.078294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.990 [2024-07-16 00:06:33.078301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.990 [2024-07-16 00:06:33.078308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.990 [2024-07-16 00:06:33.078322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.990 qpair failed and we were unable to recover it. 00:30:17.990 [2024-07-16 00:06:33.088210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.990 [2024-07-16 00:06:33.088276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.990 [2024-07-16 00:06:33.088295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.990 [2024-07-16 00:06:33.088302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.990 [2024-07-16 00:06:33.088308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.990 [2024-07-16 00:06:33.088322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.990 qpair failed and we were unable to recover it. 
00:30:17.990 [2024-07-16 00:06:33.098243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.990 [2024-07-16 00:06:33.098306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.990 [2024-07-16 00:06:33.098321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.990 [2024-07-16 00:06:33.098328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.990 [2024-07-16 00:06:33.098334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.990 [2024-07-16 00:06:33.098348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.990 qpair failed and we were unable to recover it. 00:30:17.990 [2024-07-16 00:06:33.108288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.990 [2024-07-16 00:06:33.108354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.990 [2024-07-16 00:06:33.108371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.990 [2024-07-16 00:06:33.108378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.990 [2024-07-16 00:06:33.108384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.990 [2024-07-16 00:06:33.108399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.990 qpair failed and we were unable to recover it. 00:30:17.990 [2024-07-16 00:06:33.118296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.990 [2024-07-16 00:06:33.118396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.990 [2024-07-16 00:06:33.118412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.990 [2024-07-16 00:06:33.118420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.990 [2024-07-16 00:06:33.118426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.990 [2024-07-16 00:06:33.118439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.990 qpair failed and we were unable to recover it. 
00:30:17.990 [2024-07-16 00:06:33.128313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.990 [2024-07-16 00:06:33.128387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.990 [2024-07-16 00:06:33.128403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.990 [2024-07-16 00:06:33.128410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.990 [2024-07-16 00:06:33.128416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.990 [2024-07-16 00:06:33.128434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.990 qpair failed and we were unable to recover it. 00:30:17.990 [2024-07-16 00:06:33.138366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.990 [2024-07-16 00:06:33.138435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.990 [2024-07-16 00:06:33.138452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.990 [2024-07-16 00:06:33.138459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.990 [2024-07-16 00:06:33.138466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.990 [2024-07-16 00:06:33.138480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.990 qpair failed and we were unable to recover it. 00:30:17.990 [2024-07-16 00:06:33.148383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.990 [2024-07-16 00:06:33.148448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.990 [2024-07-16 00:06:33.148464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.990 [2024-07-16 00:06:33.148471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.990 [2024-07-16 00:06:33.148477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.990 [2024-07-16 00:06:33.148491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.990 qpair failed and we were unable to recover it. 
00:30:17.990 [2024-07-16 00:06:33.158409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.990 [2024-07-16 00:06:33.158494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.990 [2024-07-16 00:06:33.158509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.990 [2024-07-16 00:06:33.158517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.990 [2024-07-16 00:06:33.158524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.990 [2024-07-16 00:06:33.158537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.990 qpair failed and we were unable to recover it. 00:30:17.990 [2024-07-16 00:06:33.168446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.990 [2024-07-16 00:06:33.168509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.990 [2024-07-16 00:06:33.168525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.990 [2024-07-16 00:06:33.168532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.990 [2024-07-16 00:06:33.168538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:17.990 [2024-07-16 00:06:33.168552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:17.990 qpair failed and we were unable to recover it. 00:30:18.251 [2024-07-16 00:06:33.178522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.251 [2024-07-16 00:06:33.178610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.251 [2024-07-16 00:06:33.178629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.251 [2024-07-16 00:06:33.178638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.251 [2024-07-16 00:06:33.178644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.251 [2024-07-16 00:06:33.178658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.251 qpair failed and we were unable to recover it. 
00:30:18.251 [2024-07-16 00:06:33.188470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.251 [2024-07-16 00:06:33.188536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.251 [2024-07-16 00:06:33.188551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.251 [2024-07-16 00:06:33.188558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.251 [2024-07-16 00:06:33.188565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.251 [2024-07-16 00:06:33.188579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.251 qpair failed and we were unable to recover it. 00:30:18.251 [2024-07-16 00:06:33.198528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.251 [2024-07-16 00:06:33.198603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.251 [2024-07-16 00:06:33.198619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.251 [2024-07-16 00:06:33.198626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.251 [2024-07-16 00:06:33.198632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.251 [2024-07-16 00:06:33.198646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.251 qpair failed and we were unable to recover it. 00:30:18.251 [2024-07-16 00:06:33.208546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.251 [2024-07-16 00:06:33.208614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.252 [2024-07-16 00:06:33.208630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.252 [2024-07-16 00:06:33.208636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.252 [2024-07-16 00:06:33.208643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.252 [2024-07-16 00:06:33.208656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.252 qpair failed and we were unable to recover it. 
00:30:18.252 [2024-07-16 00:06:33.218601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.252 [2024-07-16 00:06:33.218668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.252 [2024-07-16 00:06:33.218683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.252 [2024-07-16 00:06:33.218690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.252 [2024-07-16 00:06:33.218697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.252 [2024-07-16 00:06:33.218715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.252 qpair failed and we were unable to recover it. 00:30:18.252 [2024-07-16 00:06:33.228612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.252 [2024-07-16 00:06:33.228680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.252 [2024-07-16 00:06:33.228696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.252 [2024-07-16 00:06:33.228702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.252 [2024-07-16 00:06:33.228709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.252 [2024-07-16 00:06:33.228723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.252 qpair failed and we were unable to recover it. 00:30:18.252 [2024-07-16 00:06:33.238647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.252 [2024-07-16 00:06:33.238726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.252 [2024-07-16 00:06:33.238742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.252 [2024-07-16 00:06:33.238749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.252 [2024-07-16 00:06:33.238756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.252 [2024-07-16 00:06:33.238770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.252 qpair failed and we were unable to recover it. 
00:30:18.252 [2024-07-16 00:06:33.248677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.252 [2024-07-16 00:06:33.248747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.252 [2024-07-16 00:06:33.248761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.252 [2024-07-16 00:06:33.248769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.252 [2024-07-16 00:06:33.248775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.252 [2024-07-16 00:06:33.248789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.252 qpair failed and we were unable to recover it. 00:30:18.252 [2024-07-16 00:06:33.258634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.252 [2024-07-16 00:06:33.258733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.252 [2024-07-16 00:06:33.258749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.252 [2024-07-16 00:06:33.258757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.252 [2024-07-16 00:06:33.258763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.252 [2024-07-16 00:06:33.258777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.252 qpair failed and we were unable to recover it. 00:30:18.252 [2024-07-16 00:06:33.268735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.252 [2024-07-16 00:06:33.268805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.252 [2024-07-16 00:06:33.268825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.252 [2024-07-16 00:06:33.268832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.252 [2024-07-16 00:06:33.268838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.252 [2024-07-16 00:06:33.268853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.252 qpair failed and we were unable to recover it. 
00:30:18.252 [2024-07-16 00:06:33.278731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.252 [2024-07-16 00:06:33.278802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.252 [2024-07-16 00:06:33.278818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.252 [2024-07-16 00:06:33.278825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.252 [2024-07-16 00:06:33.278831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.252 [2024-07-16 00:06:33.278845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.252 qpair failed and we were unable to recover it. 00:30:18.252 [2024-07-16 00:06:33.288794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.252 [2024-07-16 00:06:33.288857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.252 [2024-07-16 00:06:33.288873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.252 [2024-07-16 00:06:33.288880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.252 [2024-07-16 00:06:33.288886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.252 [2024-07-16 00:06:33.288900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.252 qpair failed and we were unable to recover it. 00:30:18.252 [2024-07-16 00:06:33.298861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.252 [2024-07-16 00:06:33.298925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.252 [2024-07-16 00:06:33.298940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.252 [2024-07-16 00:06:33.298947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.252 [2024-07-16 00:06:33.298953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.252 [2024-07-16 00:06:33.298967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.252 qpair failed and we were unable to recover it. 
00:30:18.252 [2024-07-16 00:06:33.308842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.252 [2024-07-16 00:06:33.308969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.252 [2024-07-16 00:06:33.308995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.252 [2024-07-16 00:06:33.309004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.252 [2024-07-16 00:06:33.309016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.252 [2024-07-16 00:06:33.309035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.252 qpair failed and we were unable to recover it. 00:30:18.252 [2024-07-16 00:06:33.318861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.252 [2024-07-16 00:06:33.318938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.252 [2024-07-16 00:06:33.318963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.252 [2024-07-16 00:06:33.318972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.252 [2024-07-16 00:06:33.318979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.252 [2024-07-16 00:06:33.318998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.252 qpair failed and we were unable to recover it. 00:30:18.252 [2024-07-16 00:06:33.328883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.252 [2024-07-16 00:06:33.328955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.252 [2024-07-16 00:06:33.328980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.252 [2024-07-16 00:06:33.328988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.252 [2024-07-16 00:06:33.328996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.252 [2024-07-16 00:06:33.329015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.252 qpair failed and we were unable to recover it. 
00:30:18.252 [2024-07-16 00:06:33.338941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.252 [2024-07-16 00:06:33.339019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.252 [2024-07-16 00:06:33.339045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.252 [2024-07-16 00:06:33.339054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.252 [2024-07-16 00:06:33.339061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.252 [2024-07-16 00:06:33.339080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.252 qpair failed and we were unable to recover it. 00:30:18.253 [2024-07-16 00:06:33.348971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.253 [2024-07-16 00:06:33.349036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.253 [2024-07-16 00:06:33.349053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.253 [2024-07-16 00:06:33.349061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.253 [2024-07-16 00:06:33.349067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.253 [2024-07-16 00:06:33.349082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.253 qpair failed and we were unable to recover it. 00:30:18.253 [2024-07-16 00:06:33.358980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.253 [2024-07-16 00:06:33.359059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.253 [2024-07-16 00:06:33.359075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.253 [2024-07-16 00:06:33.359082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.253 [2024-07-16 00:06:33.359088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.253 [2024-07-16 00:06:33.359103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.253 qpair failed and we were unable to recover it. 
00:30:18.253 [2024-07-16 00:06:33.368966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.253 [2024-07-16 00:06:33.369026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.253 [2024-07-16 00:06:33.369042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.253 [2024-07-16 00:06:33.369049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.253 [2024-07-16 00:06:33.369055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.253 [2024-07-16 00:06:33.369069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.253 qpair failed and we were unable to recover it. 00:30:18.253 [2024-07-16 00:06:33.379035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.253 [2024-07-16 00:06:33.379101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.253 [2024-07-16 00:06:33.379116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.253 [2024-07-16 00:06:33.379124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.253 [2024-07-16 00:06:33.379130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.253 [2024-07-16 00:06:33.379143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.253 qpair failed and we were unable to recover it. 00:30:18.253 [2024-07-16 00:06:33.389089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.253 [2024-07-16 00:06:33.389156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.253 [2024-07-16 00:06:33.389171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.253 [2024-07-16 00:06:33.389178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.253 [2024-07-16 00:06:33.389185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.253 [2024-07-16 00:06:33.389198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.253 qpair failed and we were unable to recover it. 
00:30:18.253 [2024-07-16 00:06:33.399075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.253 [2024-07-16 00:06:33.399149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.253 [2024-07-16 00:06:33.399167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.253 [2024-07-16 00:06:33.399176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.253 [2024-07-16 00:06:33.399187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.253 [2024-07-16 00:06:33.399202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.253 qpair failed and we were unable to recover it. 00:30:18.253 [2024-07-16 00:06:33.409108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.253 [2024-07-16 00:06:33.409169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.253 [2024-07-16 00:06:33.409185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.253 [2024-07-16 00:06:33.409192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.253 [2024-07-16 00:06:33.409198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.253 [2024-07-16 00:06:33.409212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.253 qpair failed and we were unable to recover it. 00:30:18.253 [2024-07-16 00:06:33.419199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.253 [2024-07-16 00:06:33.419261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.253 [2024-07-16 00:06:33.419277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.253 [2024-07-16 00:06:33.419284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.253 [2024-07-16 00:06:33.419290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.253 [2024-07-16 00:06:33.419305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.253 qpair failed and we were unable to recover it. 
00:30:18.253 [2024-07-16 00:06:33.429174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.253 [2024-07-16 00:06:33.429245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.253 [2024-07-16 00:06:33.429260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.253 [2024-07-16 00:06:33.429267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.253 [2024-07-16 00:06:33.429273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.253 [2024-07-16 00:06:33.429288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.253 qpair failed and we were unable to recover it. 00:30:18.253 [2024-07-16 00:06:33.439100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.253 [2024-07-16 00:06:33.439199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.253 [2024-07-16 00:06:33.439215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.253 [2024-07-16 00:06:33.439222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.253 [2024-07-16 00:06:33.439234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.253 [2024-07-16 00:06:33.439248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.253 qpair failed and we were unable to recover it. 00:30:18.514 [2024-07-16 00:06:33.449196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.514 [2024-07-16 00:06:33.449268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.514 [2024-07-16 00:06:33.449283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.514 [2024-07-16 00:06:33.449291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.514 [2024-07-16 00:06:33.449297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.514 [2024-07-16 00:06:33.449311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.514 qpair failed and we were unable to recover it. 
00:30:18.514 [2024-07-16 00:06:33.459249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.514 [2024-07-16 00:06:33.459390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.514 [2024-07-16 00:06:33.459406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.515 [2024-07-16 00:06:33.459413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.515 [2024-07-16 00:06:33.459419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.515 [2024-07-16 00:06:33.459433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.515 qpair failed and we were unable to recover it. 00:30:18.515 [2024-07-16 00:06:33.469290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.515 [2024-07-16 00:06:33.469357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.515 [2024-07-16 00:06:33.469373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.515 [2024-07-16 00:06:33.469380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.515 [2024-07-16 00:06:33.469387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.515 [2024-07-16 00:06:33.469401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.515 qpair failed and we were unable to recover it. 00:30:18.515 [2024-07-16 00:06:33.479319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.515 [2024-07-16 00:06:33.479386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.515 [2024-07-16 00:06:33.479402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.515 [2024-07-16 00:06:33.479409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.515 [2024-07-16 00:06:33.479415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.515 [2024-07-16 00:06:33.479429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.515 qpair failed and we were unable to recover it. 
00:30:18.515 [2024-07-16 00:06:33.489363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.515 [2024-07-16 00:06:33.489430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.515 [2024-07-16 00:06:33.489446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.515 [2024-07-16 00:06:33.489457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.515 [2024-07-16 00:06:33.489463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.515 [2024-07-16 00:06:33.489477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.515 qpair failed and we were unable to recover it. 00:30:18.515 [2024-07-16 00:06:33.499377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.515 [2024-07-16 00:06:33.499451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.515 [2024-07-16 00:06:33.499466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.515 [2024-07-16 00:06:33.499473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.515 [2024-07-16 00:06:33.499480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.515 [2024-07-16 00:06:33.499494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.515 qpair failed and we were unable to recover it. 00:30:18.515 [2024-07-16 00:06:33.509399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.515 [2024-07-16 00:06:33.509490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.515 [2024-07-16 00:06:33.509506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.515 [2024-07-16 00:06:33.509513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.515 [2024-07-16 00:06:33.509519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.515 [2024-07-16 00:06:33.509533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.515 qpair failed and we were unable to recover it. 
00:30:18.515 [2024-07-16 00:06:33.519412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.515 [2024-07-16 00:06:33.519502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.515 [2024-07-16 00:06:33.519517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.515 [2024-07-16 00:06:33.519524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.515 [2024-07-16 00:06:33.519530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.515 [2024-07-16 00:06:33.519544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.515 qpair failed and we were unable to recover it. 00:30:18.515 [2024-07-16 00:06:33.529412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.515 [2024-07-16 00:06:33.529472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.515 [2024-07-16 00:06:33.529487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.515 [2024-07-16 00:06:33.529494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.515 [2024-07-16 00:06:33.529500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.515 [2024-07-16 00:06:33.529513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.515 qpair failed and we were unable to recover it. 00:30:18.515 [2024-07-16 00:06:33.539361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.515 [2024-07-16 00:06:33.539424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.515 [2024-07-16 00:06:33.539440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.515 [2024-07-16 00:06:33.539446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.515 [2024-07-16 00:06:33.539453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.515 [2024-07-16 00:06:33.539467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.515 qpair failed and we were unable to recover it. 
00:30:18.515 [2024-07-16 00:06:33.549596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.515 [2024-07-16 00:06:33.549659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.515 [2024-07-16 00:06:33.549675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.515 [2024-07-16 00:06:33.549681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.515 [2024-07-16 00:06:33.549688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.515 [2024-07-16 00:06:33.549701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.515 qpair failed and we were unable to recover it. 00:30:18.515 [2024-07-16 00:06:33.559528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.515 [2024-07-16 00:06:33.559597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.515 [2024-07-16 00:06:33.559613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.515 [2024-07-16 00:06:33.559620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.515 [2024-07-16 00:06:33.559626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.515 [2024-07-16 00:06:33.559639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.515 qpair failed and we were unable to recover it. 00:30:18.515 [2024-07-16 00:06:33.569524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.515 [2024-07-16 00:06:33.569582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.515 [2024-07-16 00:06:33.569597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.515 [2024-07-16 00:06:33.569604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.515 [2024-07-16 00:06:33.569611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.515 [2024-07-16 00:06:33.569624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.515 qpair failed and we were unable to recover it. 
00:30:18.515 [2024-07-16 00:06:33.579526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.516 [2024-07-16 00:06:33.579589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.516 [2024-07-16 00:06:33.579604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.516 [2024-07-16 00:06:33.579615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.516 [2024-07-16 00:06:33.579621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.516 [2024-07-16 00:06:33.579635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.516 qpair failed and we were unable to recover it. 00:30:18.516 [2024-07-16 00:06:33.589692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.516 [2024-07-16 00:06:33.589760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.516 [2024-07-16 00:06:33.589776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.516 [2024-07-16 00:06:33.589783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.516 [2024-07-16 00:06:33.589789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.516 [2024-07-16 00:06:33.589803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.516 qpair failed and we were unable to recover it. 00:30:18.516 [2024-07-16 00:06:33.599464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.516 [2024-07-16 00:06:33.599530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.516 [2024-07-16 00:06:33.599545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.516 [2024-07-16 00:06:33.599552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.516 [2024-07-16 00:06:33.599558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.516 [2024-07-16 00:06:33.599572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.516 qpair failed and we were unable to recover it. 
00:30:18.516 [2024-07-16 00:06:33.609617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.516 [2024-07-16 00:06:33.609678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.516 [2024-07-16 00:06:33.609693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.516 [2024-07-16 00:06:33.609700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.516 [2024-07-16 00:06:33.609706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.516 [2024-07-16 00:06:33.609720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.516 qpair failed and we were unable to recover it. 00:30:18.516 [2024-07-16 00:06:33.619651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.516 [2024-07-16 00:06:33.619713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.516 [2024-07-16 00:06:33.619728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.516 [2024-07-16 00:06:33.619735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.516 [2024-07-16 00:06:33.619742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.516 [2024-07-16 00:06:33.619755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.516 qpair failed and we were unable to recover it. 00:30:18.516 [2024-07-16 00:06:33.629711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.516 [2024-07-16 00:06:33.629866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.516 [2024-07-16 00:06:33.629882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.516 [2024-07-16 00:06:33.629889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.516 [2024-07-16 00:06:33.629895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.516 [2024-07-16 00:06:33.629909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.516 qpair failed and we were unable to recover it. 
00:30:18.516 [2024-07-16 00:06:33.639715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.516 [2024-07-16 00:06:33.639805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.516 [2024-07-16 00:06:33.639821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.516 [2024-07-16 00:06:33.639828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.516 [2024-07-16 00:06:33.639834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.516 [2024-07-16 00:06:33.639848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.516 qpair failed and we were unable to recover it. 00:30:18.516 [2024-07-16 00:06:33.649758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.516 [2024-07-16 00:06:33.649816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.516 [2024-07-16 00:06:33.649831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.516 [2024-07-16 00:06:33.649838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.516 [2024-07-16 00:06:33.649844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.516 [2024-07-16 00:06:33.649858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.516 qpair failed and we were unable to recover it. 00:30:18.516 [2024-07-16 00:06:33.659747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.516 [2024-07-16 00:06:33.659811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.516 [2024-07-16 00:06:33.659826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.516 [2024-07-16 00:06:33.659833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.516 [2024-07-16 00:06:33.659839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.516 [2024-07-16 00:06:33.659853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.516 qpair failed and we were unable to recover it. 
00:30:18.516 [2024-07-16 00:06:33.669825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.516 [2024-07-16 00:06:33.669922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.516 [2024-07-16 00:06:33.669938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.516 [2024-07-16 00:06:33.669949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.516 [2024-07-16 00:06:33.669955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.516 [2024-07-16 00:06:33.669969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.516 qpair failed and we were unable to recover it. 00:30:18.516 [2024-07-16 00:06:33.679819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.516 [2024-07-16 00:06:33.679883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.516 [2024-07-16 00:06:33.679898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.516 [2024-07-16 00:06:33.679905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.516 [2024-07-16 00:06:33.679911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.516 [2024-07-16 00:06:33.679925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.516 qpair failed and we were unable to recover it. 00:30:18.516 [2024-07-16 00:06:33.689839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.516 [2024-07-16 00:06:33.689910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.516 [2024-07-16 00:06:33.689935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.516 [2024-07-16 00:06:33.689944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.516 [2024-07-16 00:06:33.689951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.516 [2024-07-16 00:06:33.689970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.516 qpair failed and we were unable to recover it. 
00:30:18.516 [2024-07-16 00:06:33.699910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.516 [2024-07-16 00:06:33.700013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.516 [2024-07-16 00:06:33.700039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.516 [2024-07-16 00:06:33.700047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.517 [2024-07-16 00:06:33.700054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.517 [2024-07-16 00:06:33.700072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.517 qpair failed and we were unable to recover it. 00:30:18.778 [2024-07-16 00:06:33.709931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.778 [2024-07-16 00:06:33.710003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.778 [2024-07-16 00:06:33.710028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.778 [2024-07-16 00:06:33.710038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.778 [2024-07-16 00:06:33.710045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.778 [2024-07-16 00:06:33.710064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.778 qpair failed and we were unable to recover it. 00:30:18.778 [2024-07-16 00:06:33.720009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.778 [2024-07-16 00:06:33.720096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.778 [2024-07-16 00:06:33.720121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.778 [2024-07-16 00:06:33.720130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.778 [2024-07-16 00:06:33.720137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.778 [2024-07-16 00:06:33.720156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.778 qpair failed and we were unable to recover it. 
00:30:18.778 [2024-07-16 00:06:33.729960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.778 [2024-07-16 00:06:33.730019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.778 [2024-07-16 00:06:33.730036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.778 [2024-07-16 00:06:33.730043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.778 [2024-07-16 00:06:33.730049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.778 [2024-07-16 00:06:33.730064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.778 qpair failed and we were unable to recover it. 00:30:18.778 [2024-07-16 00:06:33.740019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.778 [2024-07-16 00:06:33.740082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.778 [2024-07-16 00:06:33.740098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.778 [2024-07-16 00:06:33.740105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.778 [2024-07-16 00:06:33.740111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.778 [2024-07-16 00:06:33.740125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.778 qpair failed and we were unable to recover it. 00:30:18.778 [2024-07-16 00:06:33.750086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.778 [2024-07-16 00:06:33.750156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.779 [2024-07-16 00:06:33.750171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.779 [2024-07-16 00:06:33.750178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.779 [2024-07-16 00:06:33.750184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.779 [2024-07-16 00:06:33.750198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.779 qpair failed and we were unable to recover it. 
00:30:18.779 [2024-07-16 00:06:33.760035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.779 [2024-07-16 00:06:33.760106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.779 [2024-07-16 00:06:33.760125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.779 [2024-07-16 00:06:33.760133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.779 [2024-07-16 00:06:33.760139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.779 [2024-07-16 00:06:33.760153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.779 qpair failed and we were unable to recover it. 00:30:18.779 [2024-07-16 00:06:33.770014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.779 [2024-07-16 00:06:33.770077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.779 [2024-07-16 00:06:33.770093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.779 [2024-07-16 00:06:33.770100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.779 [2024-07-16 00:06:33.770106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.779 [2024-07-16 00:06:33.770120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.779 qpair failed and we were unable to recover it. 00:30:18.779 [2024-07-16 00:06:33.780095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.779 [2024-07-16 00:06:33.780161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.779 [2024-07-16 00:06:33.780176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.779 [2024-07-16 00:06:33.780183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.779 [2024-07-16 00:06:33.780190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.779 [2024-07-16 00:06:33.780204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.779 qpair failed and we were unable to recover it. 
00:30:18.779 [2024-07-16 00:06:33.790172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.779 [2024-07-16 00:06:33.790258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.779 [2024-07-16 00:06:33.790274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.779 [2024-07-16 00:06:33.790281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.779 [2024-07-16 00:06:33.790288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.779 [2024-07-16 00:06:33.790302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.779 qpair failed and we were unable to recover it. 00:30:18.779 [2024-07-16 00:06:33.800136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.779 [2024-07-16 00:06:33.800255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.779 [2024-07-16 00:06:33.800271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.779 [2024-07-16 00:06:33.800278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.779 [2024-07-16 00:06:33.800285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.779 [2024-07-16 00:06:33.800299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.779 qpair failed and we were unable to recover it. 00:30:18.779 [2024-07-16 00:06:33.810160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.779 [2024-07-16 00:06:33.810223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.779 [2024-07-16 00:06:33.810242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.779 [2024-07-16 00:06:33.810249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.779 [2024-07-16 00:06:33.810255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.779 [2024-07-16 00:06:33.810270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.779 qpair failed and we were unable to recover it. 
00:30:18.779 [2024-07-16 00:06:33.820201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.779 [2024-07-16 00:06:33.820265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.779 [2024-07-16 00:06:33.820281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.779 [2024-07-16 00:06:33.820288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.779 [2024-07-16 00:06:33.820294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.779 [2024-07-16 00:06:33.820308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.779 qpair failed and we were unable to recover it. 00:30:18.779 [2024-07-16 00:06:33.830299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.779 [2024-07-16 00:06:33.830369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.779 [2024-07-16 00:06:33.830384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.779 [2024-07-16 00:06:33.830391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.779 [2024-07-16 00:06:33.830397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.779 [2024-07-16 00:06:33.830411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.779 qpair failed and we were unable to recover it. 00:30:18.779 [2024-07-16 00:06:33.840239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.779 [2024-07-16 00:06:33.840305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.779 [2024-07-16 00:06:33.840320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.779 [2024-07-16 00:06:33.840327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.779 [2024-07-16 00:06:33.840333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.779 [2024-07-16 00:06:33.840347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.779 qpair failed and we were unable to recover it. 
00:30:18.779 [2024-07-16 00:06:33.850295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.779 [2024-07-16 00:06:33.850354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.779 [2024-07-16 00:06:33.850372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.779 [2024-07-16 00:06:33.850379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.779 [2024-07-16 00:06:33.850385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.779 [2024-07-16 00:06:33.850399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.779 qpair failed and we were unable to recover it. 00:30:18.779 [2024-07-16 00:06:33.860318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.779 [2024-07-16 00:06:33.860382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.779 [2024-07-16 00:06:33.860397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.780 [2024-07-16 00:06:33.860404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.780 [2024-07-16 00:06:33.860410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.780 [2024-07-16 00:06:33.860424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.780 qpair failed and we were unable to recover it. 00:30:18.780 [2024-07-16 00:06:33.870396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.780 [2024-07-16 00:06:33.870467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.780 [2024-07-16 00:06:33.870482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.780 [2024-07-16 00:06:33.870489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.780 [2024-07-16 00:06:33.870495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.780 [2024-07-16 00:06:33.870509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.780 qpair failed and we were unable to recover it. 
00:30:18.780 [2024-07-16 00:06:33.880239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.780 [2024-07-16 00:06:33.880315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.780 [2024-07-16 00:06:33.880330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.780 [2024-07-16 00:06:33.880337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.780 [2024-07-16 00:06:33.880343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.780 [2024-07-16 00:06:33.880358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.780 qpair failed and we were unable to recover it. 00:30:18.780 [2024-07-16 00:06:33.890298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.780 [2024-07-16 00:06:33.890359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.780 [2024-07-16 00:06:33.890374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.780 [2024-07-16 00:06:33.890381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.780 [2024-07-16 00:06:33.890387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.780 [2024-07-16 00:06:33.890404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.780 qpair failed and we were unable to recover it. 00:30:18.780 [2024-07-16 00:06:33.900354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.780 [2024-07-16 00:06:33.900424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.780 [2024-07-16 00:06:33.900439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.780 [2024-07-16 00:06:33.900446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.780 [2024-07-16 00:06:33.900452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.780 [2024-07-16 00:06:33.900467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.780 qpair failed and we were unable to recover it. 
00:30:18.780 [2024-07-16 00:06:33.910509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.780 [2024-07-16 00:06:33.910574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.780 [2024-07-16 00:06:33.910588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.780 [2024-07-16 00:06:33.910596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.780 [2024-07-16 00:06:33.910602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.780 [2024-07-16 00:06:33.910615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.780 qpair failed and we were unable to recover it. 00:30:18.780 [2024-07-16 00:06:33.920524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.780 [2024-07-16 00:06:33.920591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.780 [2024-07-16 00:06:33.920606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.780 [2024-07-16 00:06:33.920613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.780 [2024-07-16 00:06:33.920620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.780 [2024-07-16 00:06:33.920634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.780 qpair failed and we were unable to recover it. 00:30:18.780 [2024-07-16 00:06:33.930500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.780 [2024-07-16 00:06:33.930615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.780 [2024-07-16 00:06:33.930631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.780 [2024-07-16 00:06:33.930638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.780 [2024-07-16 00:06:33.930644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.780 [2024-07-16 00:06:33.930657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.780 qpair failed and we were unable to recover it. 
00:30:18.780 [2024-07-16 00:06:33.940519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.780 [2024-07-16 00:06:33.940580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.780 [2024-07-16 00:06:33.940599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.780 [2024-07-16 00:06:33.940606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.780 [2024-07-16 00:06:33.940613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.780 [2024-07-16 00:06:33.940626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.780 qpair failed and we were unable to recover it. 00:30:18.780 [2024-07-16 00:06:33.950596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.780 [2024-07-16 00:06:33.950665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.780 [2024-07-16 00:06:33.950680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.780 [2024-07-16 00:06:33.950687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.780 [2024-07-16 00:06:33.950693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.780 [2024-07-16 00:06:33.950707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.780 qpair failed and we were unable to recover it. 00:30:18.780 [2024-07-16 00:06:33.960594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.780 [2024-07-16 00:06:33.960664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.780 [2024-07-16 00:06:33.960680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.780 [2024-07-16 00:06:33.960687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.780 [2024-07-16 00:06:33.960693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:18.780 [2024-07-16 00:06:33.960707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:18.780 qpair failed and we were unable to recover it. 
00:30:19.043 [2024-07-16 00:06:33.970489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.043 [2024-07-16 00:06:33.970559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.043 [2024-07-16 00:06:33.970574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.043 [2024-07-16 00:06:33.970581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.043 [2024-07-16 00:06:33.970587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.043 [2024-07-16 00:06:33.970602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.043 qpair failed and we were unable to recover it. 00:30:19.043 [2024-07-16 00:06:33.980637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.043 [2024-07-16 00:06:33.980699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.043 [2024-07-16 00:06:33.980714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.043 [2024-07-16 00:06:33.980721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.043 [2024-07-16 00:06:33.980728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.043 [2024-07-16 00:06:33.980745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.043 qpair failed and we were unable to recover it. 00:30:19.043 [2024-07-16 00:06:33.990590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.043 [2024-07-16 00:06:33.990658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.043 [2024-07-16 00:06:33.990673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.043 [2024-07-16 00:06:33.990680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.043 [2024-07-16 00:06:33.990686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.043 [2024-07-16 00:06:33.990699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.043 qpair failed and we were unable to recover it. 
00:30:19.043 [2024-07-16 00:06:34.000703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.043 [2024-07-16 00:06:34.000767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.043 [2024-07-16 00:06:34.000782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.043 [2024-07-16 00:06:34.000789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.043 [2024-07-16 00:06:34.000795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.043 [2024-07-16 00:06:34.000809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.043 qpair failed and we were unable to recover it. 00:30:19.044 [2024-07-16 00:06:34.010709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.044 [2024-07-16 00:06:34.010768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.044 [2024-07-16 00:06:34.010783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.044 [2024-07-16 00:06:34.010791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.044 [2024-07-16 00:06:34.010797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.044 [2024-07-16 00:06:34.010810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.044 qpair failed and we were unable to recover it. 00:30:19.044 [2024-07-16 00:06:34.020759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.044 [2024-07-16 00:06:34.020816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.044 [2024-07-16 00:06:34.020832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.044 [2024-07-16 00:06:34.020838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.044 [2024-07-16 00:06:34.020844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.044 [2024-07-16 00:06:34.020858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.044 qpair failed and we were unable to recover it. 
00:30:19.044 [2024-07-16 00:06:34.030781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.044 [2024-07-16 00:06:34.030842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.044 [2024-07-16 00:06:34.030861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.044 [2024-07-16 00:06:34.030868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.044 [2024-07-16 00:06:34.030873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.044 [2024-07-16 00:06:34.030887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.044 qpair failed and we were unable to recover it. 00:30:19.044 [2024-07-16 00:06:34.040677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.044 [2024-07-16 00:06:34.040751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.044 [2024-07-16 00:06:34.040768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.044 [2024-07-16 00:06:34.040775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.044 [2024-07-16 00:06:34.040782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.044 [2024-07-16 00:06:34.040796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.044 qpair failed and we were unable to recover it. 00:30:19.044 [2024-07-16 00:06:34.050838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.044 [2024-07-16 00:06:34.050896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.044 [2024-07-16 00:06:34.050911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.044 [2024-07-16 00:06:34.050918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.044 [2024-07-16 00:06:34.050924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.044 [2024-07-16 00:06:34.050938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.044 qpair failed and we were unable to recover it. 
00:30:19.044 [2024-07-16 00:06:34.060845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.044 [2024-07-16 00:06:34.060911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.044 [2024-07-16 00:06:34.060937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.044 [2024-07-16 00:06:34.060946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.044 [2024-07-16 00:06:34.060953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.044 [2024-07-16 00:06:34.060972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.044 qpair failed and we were unable to recover it. 00:30:19.044 [2024-07-16 00:06:34.070878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.044 [2024-07-16 00:06:34.070947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.044 [2024-07-16 00:06:34.070973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.044 [2024-07-16 00:06:34.070983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.044 [2024-07-16 00:06:34.070994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.044 [2024-07-16 00:06:34.071013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.044 qpair failed and we were unable to recover it. 00:30:19.044 [2024-07-16 00:06:34.080910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.044 [2024-07-16 00:06:34.080986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.044 [2024-07-16 00:06:34.081011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.044 [2024-07-16 00:06:34.081020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.044 [2024-07-16 00:06:34.081027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.044 [2024-07-16 00:06:34.081045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.044 qpair failed and we were unable to recover it. 
00:30:19.044 [2024-07-16 00:06:34.090890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.044 [2024-07-16 00:06:34.090954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.044 [2024-07-16 00:06:34.090979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.044 [2024-07-16 00:06:34.090987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.044 [2024-07-16 00:06:34.090995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.044 [2024-07-16 00:06:34.091014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.044 qpair failed and we were unable to recover it. 00:30:19.044 [2024-07-16 00:06:34.100990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.044 [2024-07-16 00:06:34.101051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.044 [2024-07-16 00:06:34.101068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.044 [2024-07-16 00:06:34.101075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.044 [2024-07-16 00:06:34.101081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.044 [2024-07-16 00:06:34.101096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.044 qpair failed and we were unable to recover it. 00:30:19.044 [2024-07-16 00:06:34.111019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.044 [2024-07-16 00:06:34.111115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.044 [2024-07-16 00:06:34.111131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.044 [2024-07-16 00:06:34.111139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.044 [2024-07-16 00:06:34.111145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.044 [2024-07-16 00:06:34.111159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.044 qpair failed and we were unable to recover it. 
00:30:19.044 [2024-07-16 00:06:34.120998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.044 [2024-07-16 00:06:34.121068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.044 [2024-07-16 00:06:34.121084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.045 [2024-07-16 00:06:34.121091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.045 [2024-07-16 00:06:34.121097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.045 [2024-07-16 00:06:34.121111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.045 qpair failed and we were unable to recover it. 00:30:19.045 [2024-07-16 00:06:34.131022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.045 [2024-07-16 00:06:34.131078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.045 [2024-07-16 00:06:34.131094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.045 [2024-07-16 00:06:34.131101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.045 [2024-07-16 00:06:34.131107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.045 [2024-07-16 00:06:34.131122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.045 qpair failed and we were unable to recover it. 00:30:19.045 [2024-07-16 00:06:34.141061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.045 [2024-07-16 00:06:34.141120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.045 [2024-07-16 00:06:34.141137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.045 [2024-07-16 00:06:34.141144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.045 [2024-07-16 00:06:34.141150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.045 [2024-07-16 00:06:34.141165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.045 qpair failed and we were unable to recover it. 
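Two other numbers recur in every one of these blocks: the connect poll reports rc -5, and spdk_nvme_qpair_process_completions reports CQ transport error -6 with the text "No such device or address". Both are ordinary negated errno values, and a one-liner confirms the mapping; this is a generic illustration, not an SPDK API call.

    import errno, os
    # -5 and -6 as printed in the log are negated errno values.
    print(errno.errorcode[5], os.strerror(5))  # EIO   Input/output error
    print(errno.errorcode[6], os.strerror(6))  # ENXIO No such device or address

So each block amounts to: CONNECT rejected by the target (sct 1, sc 130), reported to the caller as -EIO, and the qpair then torn down with -ENXIO.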
00:30:19.045 [2024-07-16 00:06:34.151135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.045 [2024-07-16 00:06:34.151194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.045 [2024-07-16 00:06:34.151209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.045 [2024-07-16 00:06:34.151216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.045 [2024-07-16 00:06:34.151222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.045 [2024-07-16 00:06:34.151240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.045 qpair failed and we were unable to recover it. 00:30:19.045 [2024-07-16 00:06:34.161083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.045 [2024-07-16 00:06:34.161146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.045 [2024-07-16 00:06:34.161161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.045 [2024-07-16 00:06:34.161168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.045 [2024-07-16 00:06:34.161179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.045 [2024-07-16 00:06:34.161193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.045 qpair failed and we were unable to recover it. 00:30:19.045 [2024-07-16 00:06:34.171141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.045 [2024-07-16 00:06:34.171200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.045 [2024-07-16 00:06:34.171216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.045 [2024-07-16 00:06:34.171223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.045 [2024-07-16 00:06:34.171232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.045 [2024-07-16 00:06:34.171247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.045 qpair failed and we were unable to recover it. 
00:30:19.045 [2024-07-16 00:06:34.181169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.045 [2024-07-16 00:06:34.181247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.045 [2024-07-16 00:06:34.181266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.045 [2024-07-16 00:06:34.181274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.045 [2024-07-16 00:06:34.181280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.045 [2024-07-16 00:06:34.181295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.045 qpair failed and we were unable to recover it. 00:30:19.045 [2024-07-16 00:06:34.191218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.045 [2024-07-16 00:06:34.191326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.045 [2024-07-16 00:06:34.191343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.045 [2024-07-16 00:06:34.191349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.045 [2024-07-16 00:06:34.191356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.045 [2024-07-16 00:06:34.191370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.045 qpair failed and we were unable to recover it. 00:30:19.045 [2024-07-16 00:06:34.201240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.045 [2024-07-16 00:06:34.201302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.045 [2024-07-16 00:06:34.201317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.045 [2024-07-16 00:06:34.201324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.045 [2024-07-16 00:06:34.201330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.045 [2024-07-16 00:06:34.201344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.045 qpair failed and we were unable to recover it. 
00:30:19.045 [2024-07-16 00:06:34.211248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.045 [2024-07-16 00:06:34.211311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.045 [2024-07-16 00:06:34.211326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.045 [2024-07-16 00:06:34.211333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.045 [2024-07-16 00:06:34.211340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.045 [2024-07-16 00:06:34.211353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.045 qpair failed and we were unable to recover it. 00:30:19.045 [2024-07-16 00:06:34.221232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.045 [2024-07-16 00:06:34.221293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.045 [2024-07-16 00:06:34.221308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.045 [2024-07-16 00:06:34.221315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.045 [2024-07-16 00:06:34.221321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.045 [2024-07-16 00:06:34.221334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.045 qpair failed and we were unable to recover it. 00:30:19.045 [2024-07-16 00:06:34.231316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.045 [2024-07-16 00:06:34.231397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.045 [2024-07-16 00:06:34.231412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.045 [2024-07-16 00:06:34.231420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.045 [2024-07-16 00:06:34.231426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.045 [2024-07-16 00:06:34.231439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.045 qpair failed and we were unable to recover it. 
00:30:19.308 [2024-07-16 00:06:34.241361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.308 [2024-07-16 00:06:34.241426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.308 [2024-07-16 00:06:34.241441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.308 [2024-07-16 00:06:34.241448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.308 [2024-07-16 00:06:34.241454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.308 [2024-07-16 00:06:34.241468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.308 qpair failed and we were unable to recover it. 00:30:19.308 [2024-07-16 00:06:34.251368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.308 [2024-07-16 00:06:34.251471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.308 [2024-07-16 00:06:34.251486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.308 [2024-07-16 00:06:34.251493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.308 [2024-07-16 00:06:34.251504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.308 [2024-07-16 00:06:34.251518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.308 qpair failed and we were unable to recover it. 00:30:19.308 [2024-07-16 00:06:34.261264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.308 [2024-07-16 00:06:34.261325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.308 [2024-07-16 00:06:34.261340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.308 [2024-07-16 00:06:34.261347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.308 [2024-07-16 00:06:34.261353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.308 [2024-07-16 00:06:34.261367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.308 qpair failed and we were unable to recover it. 
00:30:19.308 [2024-07-16 00:06:34.271410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.308 [2024-07-16 00:06:34.271475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.308 [2024-07-16 00:06:34.271491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.308 [2024-07-16 00:06:34.271498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.308 [2024-07-16 00:06:34.271504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.308 [2024-07-16 00:06:34.271519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.308 qpair failed and we were unable to recover it. 00:30:19.308 [2024-07-16 00:06:34.281478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.308 [2024-07-16 00:06:34.281547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.308 [2024-07-16 00:06:34.281563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.308 [2024-07-16 00:06:34.281573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.308 [2024-07-16 00:06:34.281580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.308 [2024-07-16 00:06:34.281595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.308 qpair failed and we were unable to recover it. 00:30:19.308 [2024-07-16 00:06:34.291497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.308 [2024-07-16 00:06:34.291588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.308 [2024-07-16 00:06:34.291604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.308 [2024-07-16 00:06:34.291611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.308 [2024-07-16 00:06:34.291617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.308 [2024-07-16 00:06:34.291631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.308 qpair failed and we were unable to recover it. 
00:30:19.308 [2024-07-16 00:06:34.301520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.308 [2024-07-16 00:06:34.301575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.308 [2024-07-16 00:06:34.301591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.308 [2024-07-16 00:06:34.301597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.308 [2024-07-16 00:06:34.301604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.308 [2024-07-16 00:06:34.301617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.308 qpair failed and we were unable to recover it. 00:30:19.308 [2024-07-16 00:06:34.311538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.308 [2024-07-16 00:06:34.311601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.308 [2024-07-16 00:06:34.311616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.308 [2024-07-16 00:06:34.311623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.309 [2024-07-16 00:06:34.311629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.309 [2024-07-16 00:06:34.311643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.309 qpair failed and we were unable to recover it. 00:30:19.309 [2024-07-16 00:06:34.321539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.309 [2024-07-16 00:06:34.321637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.309 [2024-07-16 00:06:34.321653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.309 [2024-07-16 00:06:34.321660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.309 [2024-07-16 00:06:34.321667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.309 [2024-07-16 00:06:34.321681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.309 qpair failed and we were unable to recover it. 
00:30:19.309 [2024-07-16 00:06:34.331589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.309 [2024-07-16 00:06:34.331652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.309 [2024-07-16 00:06:34.331667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.309 [2024-07-16 00:06:34.331674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.309 [2024-07-16 00:06:34.331680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.309 [2024-07-16 00:06:34.331694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.309 qpair failed and we were unable to recover it. 00:30:19.309 [2024-07-16 00:06:34.341598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.309 [2024-07-16 00:06:34.341653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.309 [2024-07-16 00:06:34.341668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.309 [2024-07-16 00:06:34.341679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.309 [2024-07-16 00:06:34.341685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.309 [2024-07-16 00:06:34.341698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.309 qpair failed and we were unable to recover it. 00:30:19.309 [2024-07-16 00:06:34.351712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.309 [2024-07-16 00:06:34.351768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.309 [2024-07-16 00:06:34.351783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.309 [2024-07-16 00:06:34.351790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.309 [2024-07-16 00:06:34.351796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.309 [2024-07-16 00:06:34.351810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.309 qpair failed and we were unable to recover it. 
00:30:19.309 [2024-07-16 00:06:34.361664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.309 [2024-07-16 00:06:34.361769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.309 [2024-07-16 00:06:34.361785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.309 [2024-07-16 00:06:34.361792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.309 [2024-07-16 00:06:34.361798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.309 [2024-07-16 00:06:34.361812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.309 qpair failed and we were unable to recover it. 00:30:19.309 [2024-07-16 00:06:34.371602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.309 [2024-07-16 00:06:34.371661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.309 [2024-07-16 00:06:34.371677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.309 [2024-07-16 00:06:34.371683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.309 [2024-07-16 00:06:34.371690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.309 [2024-07-16 00:06:34.371704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.309 qpair failed and we were unable to recover it. 00:30:19.309 [2024-07-16 00:06:34.381717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.309 [2024-07-16 00:06:34.381775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.309 [2024-07-16 00:06:34.381790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.309 [2024-07-16 00:06:34.381797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.309 [2024-07-16 00:06:34.381804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.309 [2024-07-16 00:06:34.381817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.309 qpair failed and we were unable to recover it. 
00:30:19.309 [2024-07-16 00:06:34.391746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.309 [2024-07-16 00:06:34.391808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.309 [2024-07-16 00:06:34.391823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.309 [2024-07-16 00:06:34.391830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.309 [2024-07-16 00:06:34.391836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.309 [2024-07-16 00:06:34.391850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.309 qpair failed and we were unable to recover it. 00:30:19.309 [2024-07-16 00:06:34.401764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.309 [2024-07-16 00:06:34.401828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.309 [2024-07-16 00:06:34.401843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.309 [2024-07-16 00:06:34.401850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.309 [2024-07-16 00:06:34.401856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.310 [2024-07-16 00:06:34.401869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.310 qpair failed and we were unable to recover it. 00:30:19.310 [2024-07-16 00:06:34.411796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.310 [2024-07-16 00:06:34.411855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.310 [2024-07-16 00:06:34.411870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.310 [2024-07-16 00:06:34.411877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.310 [2024-07-16 00:06:34.411883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.310 [2024-07-16 00:06:34.411897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.310 qpair failed and we were unable to recover it. 
00:30:19.310 [2024-07-16 00:06:34.421717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.310 [2024-07-16 00:06:34.421775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.310 [2024-07-16 00:06:34.421792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.310 [2024-07-16 00:06:34.421800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.310 [2024-07-16 00:06:34.421806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.310 [2024-07-16 00:06:34.421821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.310 qpair failed and we were unable to recover it. 00:30:19.310 [2024-07-16 00:06:34.431861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.310 [2024-07-16 00:06:34.431918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.310 [2024-07-16 00:06:34.431934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.310 [2024-07-16 00:06:34.431945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.310 [2024-07-16 00:06:34.431951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.310 [2024-07-16 00:06:34.431965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.310 qpair failed and we were unable to recover it. 00:30:19.310 [2024-07-16 00:06:34.441892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.310 [2024-07-16 00:06:34.441965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.310 [2024-07-16 00:06:34.441990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.310 [2024-07-16 00:06:34.442000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.310 [2024-07-16 00:06:34.442006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.310 [2024-07-16 00:06:34.442025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.310 qpair failed and we were unable to recover it. 
00:30:19.310 [2024-07-16 00:06:34.451792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.310 [2024-07-16 00:06:34.451861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.310 [2024-07-16 00:06:34.451886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.310 [2024-07-16 00:06:34.451896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.310 [2024-07-16 00:06:34.451902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.310 [2024-07-16 00:06:34.451922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.310 qpair failed and we were unable to recover it. 00:30:19.310 [2024-07-16 00:06:34.461941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.310 [2024-07-16 00:06:34.462008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.310 [2024-07-16 00:06:34.462033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.310 [2024-07-16 00:06:34.462043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.310 [2024-07-16 00:06:34.462050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.310 [2024-07-16 00:06:34.462068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.310 qpair failed and we were unable to recover it. 00:30:19.310 [2024-07-16 00:06:34.471973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.310 [2024-07-16 00:06:34.472039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.310 [2024-07-16 00:06:34.472065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.310 [2024-07-16 00:06:34.472073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.310 [2024-07-16 00:06:34.472080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.310 [2024-07-16 00:06:34.472099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.310 qpair failed and we were unable to recover it. 
00:30:19.310 [2024-07-16 00:06:34.481869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.310 [2024-07-16 00:06:34.481931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.310 [2024-07-16 00:06:34.481948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.310 [2024-07-16 00:06:34.481956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.310 [2024-07-16 00:06:34.481962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.310 [2024-07-16 00:06:34.481977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.310 qpair failed and we were unable to recover it. 00:30:19.310 [2024-07-16 00:06:34.491888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.310 [2024-07-16 00:06:34.491967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.310 [2024-07-16 00:06:34.491982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.310 [2024-07-16 00:06:34.491990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.310 [2024-07-16 00:06:34.491996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.310 [2024-07-16 00:06:34.492010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.310 qpair failed and we were unable to recover it. 00:30:19.572 [2024-07-16 00:06:34.502034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.572 [2024-07-16 00:06:34.502092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.572 [2024-07-16 00:06:34.502107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.572 [2024-07-16 00:06:34.502114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.572 [2024-07-16 00:06:34.502121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.572 [2024-07-16 00:06:34.502135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.572 qpair failed and we were unable to recover it. 
00:30:19.572 [2024-07-16 00:06:34.512072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.572 [2024-07-16 00:06:34.512131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.572 [2024-07-16 00:06:34.512146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.572 [2024-07-16 00:06:34.512154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.572 [2024-07-16 00:06:34.512162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.572 [2024-07-16 00:06:34.512177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.572 qpair failed and we were unable to recover it. 00:30:19.572 [2024-07-16 00:06:34.522126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.572 [2024-07-16 00:06:34.522191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.572 [2024-07-16 00:06:34.522210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.572 [2024-07-16 00:06:34.522218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.572 [2024-07-16 00:06:34.522224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.572 [2024-07-16 00:06:34.522242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.572 qpair failed and we were unable to recover it. 00:30:19.572 [2024-07-16 00:06:34.532143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.572 [2024-07-16 00:06:34.532201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.572 [2024-07-16 00:06:34.532216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.572 [2024-07-16 00:06:34.532224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.572 [2024-07-16 00:06:34.532235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.572 [2024-07-16 00:06:34.532249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.572 qpair failed and we were unable to recover it. 
00:30:19.572 [2024-07-16 00:06:34.542136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.572 [2024-07-16 00:06:34.542196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.572 [2024-07-16 00:06:34.542211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.572 [2024-07-16 00:06:34.542219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.572 [2024-07-16 00:06:34.542225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.572 [2024-07-16 00:06:34.542245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.572 qpair failed and we were unable to recover it. 00:30:19.572 [2024-07-16 00:06:34.552163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.572 [2024-07-16 00:06:34.552224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.572 [2024-07-16 00:06:34.552243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.573 [2024-07-16 00:06:34.552250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.573 [2024-07-16 00:06:34.552257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.573 [2024-07-16 00:06:34.552271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.573 qpair failed and we were unable to recover it. 00:30:19.573 [2024-07-16 00:06:34.562189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.573 [2024-07-16 00:06:34.562263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.573 [2024-07-16 00:06:34.562278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.573 [2024-07-16 00:06:34.562285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.573 [2024-07-16 00:06:34.562291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.573 [2024-07-16 00:06:34.562305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.573 qpair failed and we were unable to recover it. 
00:30:19.573 [2024-07-16 00:06:34.572257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.573 [2024-07-16 00:06:34.572368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.573 [2024-07-16 00:06:34.572383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.573 [2024-07-16 00:06:34.572390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.573 [2024-07-16 00:06:34.572396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.573 [2024-07-16 00:06:34.572411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.573 qpair failed and we were unable to recover it. 00:30:19.573 [2024-07-16 00:06:34.582130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.573 [2024-07-16 00:06:34.582194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.573 [2024-07-16 00:06:34.582209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.573 [2024-07-16 00:06:34.582217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.573 [2024-07-16 00:06:34.582223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.573 [2024-07-16 00:06:34.582241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.573 qpair failed and we were unable to recover it. 00:30:19.573 [2024-07-16 00:06:34.592170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.573 [2024-07-16 00:06:34.592238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.573 [2024-07-16 00:06:34.592253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.573 [2024-07-16 00:06:34.592260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.573 [2024-07-16 00:06:34.592266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.573 [2024-07-16 00:06:34.592280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.573 qpair failed and we were unable to recover it. 
00:30:19.573 [2024-07-16 00:06:34.602562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.573 [2024-07-16 00:06:34.602625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.573 [2024-07-16 00:06:34.602641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.573 [2024-07-16 00:06:34.602648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.573 [2024-07-16 00:06:34.602654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.573 [2024-07-16 00:06:34.602668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.573 qpair failed and we were unable to recover it. 00:30:19.573 [2024-07-16 00:06:34.612324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.573 [2024-07-16 00:06:34.612388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.573 [2024-07-16 00:06:34.612407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.573 [2024-07-16 00:06:34.612414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.573 [2024-07-16 00:06:34.612420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.573 [2024-07-16 00:06:34.612434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.573 qpair failed and we were unable to recover it. 00:30:19.573 [2024-07-16 00:06:34.622370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.573 [2024-07-16 00:06:34.622428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.573 [2024-07-16 00:06:34.622444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.573 [2024-07-16 00:06:34.622451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.573 [2024-07-16 00:06:34.622457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.573 [2024-07-16 00:06:34.622471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.573 qpair failed and we were unable to recover it. 
00:30:19.573 [2024-07-16 00:06:34.632371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.573 [2024-07-16 00:06:34.632436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.573 [2024-07-16 00:06:34.632451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.573 [2024-07-16 00:06:34.632458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.573 [2024-07-16 00:06:34.632465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.573 [2024-07-16 00:06:34.632479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.573 qpair failed and we were unable to recover it. 00:30:19.573 [2024-07-16 00:06:34.642412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.573 [2024-07-16 00:06:34.642475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.573 [2024-07-16 00:06:34.642490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.573 [2024-07-16 00:06:34.642497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.573 [2024-07-16 00:06:34.642503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.573 [2024-07-16 00:06:34.642517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.573 qpair failed and we were unable to recover it. 00:30:19.573 [2024-07-16 00:06:34.652494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.573 [2024-07-16 00:06:34.652565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.573 [2024-07-16 00:06:34.652580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.573 [2024-07-16 00:06:34.652587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.573 [2024-07-16 00:06:34.652593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.573 [2024-07-16 00:06:34.652610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.573 qpair failed and we were unable to recover it. 
00:30:19.573 [2024-07-16 00:06:34.662465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.573 [2024-07-16 00:06:34.662530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.573 [2024-07-16 00:06:34.662545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.573 [2024-07-16 00:06:34.662552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.573 [2024-07-16 00:06:34.662559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.574 [2024-07-16 00:06:34.662572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.574 qpair failed and we were unable to recover it. 00:30:19.574 [2024-07-16 00:06:34.672517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.574 [2024-07-16 00:06:34.672574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.574 [2024-07-16 00:06:34.672589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.574 [2024-07-16 00:06:34.672596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.574 [2024-07-16 00:06:34.672602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.574 [2024-07-16 00:06:34.672616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.574 qpair failed and we were unable to recover it. 00:30:19.574 [2024-07-16 00:06:34.682521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.574 [2024-07-16 00:06:34.682582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.574 [2024-07-16 00:06:34.682597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.574 [2024-07-16 00:06:34.682604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.574 [2024-07-16 00:06:34.682610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.574 [2024-07-16 00:06:34.682624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.574 qpair failed and we were unable to recover it. 
00:30:19.574 [2024-07-16 00:06:34.692440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.574 [2024-07-16 00:06:34.692497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.574 [2024-07-16 00:06:34.692513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.574 [2024-07-16 00:06:34.692520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.574 [2024-07-16 00:06:34.692526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.574 [2024-07-16 00:06:34.692539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.574 qpair failed and we were unable to recover it. 00:30:19.574 [2024-07-16 00:06:34.702568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.574 [2024-07-16 00:06:34.702628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.574 [2024-07-16 00:06:34.702647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.574 [2024-07-16 00:06:34.702654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.574 [2024-07-16 00:06:34.702660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.574 [2024-07-16 00:06:34.702674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.574 qpair failed and we were unable to recover it. 00:30:19.574 [2024-07-16 00:06:34.712612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.574 [2024-07-16 00:06:34.712674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.574 [2024-07-16 00:06:34.712689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.574 [2024-07-16 00:06:34.712696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.574 [2024-07-16 00:06:34.712702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.574 [2024-07-16 00:06:34.712716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.574 qpair failed and we were unable to recover it. 
00:30:19.574 [2024-07-16 00:06:34.722626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.574 [2024-07-16 00:06:34.722687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.574 [2024-07-16 00:06:34.722703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.574 [2024-07-16 00:06:34.722710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.574 [2024-07-16 00:06:34.722716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.574 [2024-07-16 00:06:34.722730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.574 qpair failed and we were unable to recover it. 00:30:19.574 [2024-07-16 00:06:34.732645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.574 [2024-07-16 00:06:34.732723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.574 [2024-07-16 00:06:34.732738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.574 [2024-07-16 00:06:34.732745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.574 [2024-07-16 00:06:34.732751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.574 [2024-07-16 00:06:34.732765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.574 qpair failed and we were unable to recover it. 00:30:19.574 [2024-07-16 00:06:34.742671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.574 [2024-07-16 00:06:34.742738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.574 [2024-07-16 00:06:34.742753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.574 [2024-07-16 00:06:34.742760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.574 [2024-07-16 00:06:34.742766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.574 [2024-07-16 00:06:34.742784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.574 qpair failed and we were unable to recover it. 
00:30:19.574 [2024-07-16 00:06:34.752722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.574 [2024-07-16 00:06:34.752830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.574 [2024-07-16 00:06:34.752846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.574 [2024-07-16 00:06:34.752853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.574 [2024-07-16 00:06:34.752859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.574 [2024-07-16 00:06:34.752873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.574 qpair failed and we were unable to recover it. 00:30:19.836 [2024-07-16 00:06:34.762731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.836 [2024-07-16 00:06:34.762795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.836 [2024-07-16 00:06:34.762810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.836 [2024-07-16 00:06:34.762817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.836 [2024-07-16 00:06:34.762824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.836 [2024-07-16 00:06:34.762837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.836 qpair failed and we were unable to recover it. 00:30:19.836 [2024-07-16 00:06:34.772751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.836 [2024-07-16 00:06:34.772810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.836 [2024-07-16 00:06:34.772826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.836 [2024-07-16 00:06:34.772833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.836 [2024-07-16 00:06:34.772840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.837 [2024-07-16 00:06:34.772854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.837 qpair failed and we were unable to recover it. 
00:30:19.837 [2024-07-16 00:06:34.782807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.837 [2024-07-16 00:06:34.782864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.837 [2024-07-16 00:06:34.782880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.837 [2024-07-16 00:06:34.782887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.837 [2024-07-16 00:06:34.782894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.837 [2024-07-16 00:06:34.782908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.837 qpair failed and we were unable to recover it. 00:30:19.837 [2024-07-16 00:06:34.792824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.837 [2024-07-16 00:06:34.792886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.837 [2024-07-16 00:06:34.792904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.837 [2024-07-16 00:06:34.792911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.837 [2024-07-16 00:06:34.792918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.837 [2024-07-16 00:06:34.792931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.837 qpair failed and we were unable to recover it. 00:30:19.837 [2024-07-16 00:06:34.802846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.837 [2024-07-16 00:06:34.802911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.837 [2024-07-16 00:06:34.802925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.837 [2024-07-16 00:06:34.802932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.837 [2024-07-16 00:06:34.802938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.837 [2024-07-16 00:06:34.802952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.837 qpair failed and we were unable to recover it. 
00:30:19.837 [2024-07-16 00:06:34.812877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.837 [2024-07-16 00:06:34.812941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.837 [2024-07-16 00:06:34.812967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.837 [2024-07-16 00:06:34.812976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.837 [2024-07-16 00:06:34.812983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.837 [2024-07-16 00:06:34.813001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.837 qpair failed and we were unable to recover it. 00:30:19.837 [2024-07-16 00:06:34.822925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.837 [2024-07-16 00:06:34.822989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.837 [2024-07-16 00:06:34.823014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.837 [2024-07-16 00:06:34.823023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.837 [2024-07-16 00:06:34.823029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.837 [2024-07-16 00:06:34.823049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.837 qpair failed and we were unable to recover it. 00:30:19.837 [2024-07-16 00:06:34.832949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.837 [2024-07-16 00:06:34.833014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.837 [2024-07-16 00:06:34.833039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.837 [2024-07-16 00:06:34.833048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.837 [2024-07-16 00:06:34.833059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.837 [2024-07-16 00:06:34.833079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.837 qpair failed and we were unable to recover it. 
00:30:19.837 [2024-07-16 00:06:34.842952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.837 [2024-07-16 00:06:34.843024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.837 [2024-07-16 00:06:34.843049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.837 [2024-07-16 00:06:34.843057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.837 [2024-07-16 00:06:34.843065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.837 [2024-07-16 00:06:34.843083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.837 qpair failed and we were unable to recover it. 00:30:19.837 [2024-07-16 00:06:34.852985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.837 [2024-07-16 00:06:34.853081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.837 [2024-07-16 00:06:34.853098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.837 [2024-07-16 00:06:34.853105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.837 [2024-07-16 00:06:34.853112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.837 [2024-07-16 00:06:34.853127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.837 qpair failed and we were unable to recover it. 00:30:19.837 [2024-07-16 00:06:34.863036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.837 [2024-07-16 00:06:34.863096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.837 [2024-07-16 00:06:34.863111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.837 [2024-07-16 00:06:34.863118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.837 [2024-07-16 00:06:34.863124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.837 [2024-07-16 00:06:34.863138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.837 qpair failed and we were unable to recover it. 
00:30:19.837 [2024-07-16 00:06:34.873041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.837 [2024-07-16 00:06:34.873099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.837 [2024-07-16 00:06:34.873115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.837 [2024-07-16 00:06:34.873122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.837 [2024-07-16 00:06:34.873128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.837 [2024-07-16 00:06:34.873142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.837 qpair failed and we were unable to recover it. 00:30:19.837 [2024-07-16 00:06:34.883060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.837 [2024-07-16 00:06:34.883129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.837 [2024-07-16 00:06:34.883144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.837 [2024-07-16 00:06:34.883151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.837 [2024-07-16 00:06:34.883157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.838 [2024-07-16 00:06:34.883171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.838 qpair failed and we were unable to recover it. 00:30:19.838 [2024-07-16 00:06:34.892962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.838 [2024-07-16 00:06:34.893026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.838 [2024-07-16 00:06:34.893042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.838 [2024-07-16 00:06:34.893049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.838 [2024-07-16 00:06:34.893055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.838 [2024-07-16 00:06:34.893068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.838 qpair failed and we were unable to recover it. 
00:30:19.838 [2024-07-16 00:06:34.903102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.838 [2024-07-16 00:06:34.903166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.838 [2024-07-16 00:06:34.903181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.838 [2024-07-16 00:06:34.903188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.838 [2024-07-16 00:06:34.903194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.838 [2024-07-16 00:06:34.903208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.838 qpair failed and we were unable to recover it. 00:30:19.838 [2024-07-16 00:06:34.913025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.838 [2024-07-16 00:06:34.913103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.838 [2024-07-16 00:06:34.913118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.838 [2024-07-16 00:06:34.913125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.838 [2024-07-16 00:06:34.913132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.838 [2024-07-16 00:06:34.913146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.838 qpair failed and we were unable to recover it. 00:30:19.838 [2024-07-16 00:06:34.923168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.838 [2024-07-16 00:06:34.923234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.838 [2024-07-16 00:06:34.923250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.838 [2024-07-16 00:06:34.923257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.838 [2024-07-16 00:06:34.923267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.838 [2024-07-16 00:06:34.923281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.838 qpair failed and we were unable to recover it. 
00:30:19.838 [2024-07-16 00:06:34.933187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.838 [2024-07-16 00:06:34.933249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.838 [2024-07-16 00:06:34.933264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.838 [2024-07-16 00:06:34.933271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.838 [2024-07-16 00:06:34.933277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.838 [2024-07-16 00:06:34.933291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.838 qpair failed and we were unable to recover it. 00:30:19.838 [2024-07-16 00:06:34.943169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.838 [2024-07-16 00:06:34.943236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.838 [2024-07-16 00:06:34.943252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.838 [2024-07-16 00:06:34.943259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.838 [2024-07-16 00:06:34.943266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.838 [2024-07-16 00:06:34.943280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.838 qpair failed and we were unable to recover it. 00:30:19.838 [2024-07-16 00:06:34.953253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.838 [2024-07-16 00:06:34.953313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.838 [2024-07-16 00:06:34.953327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.838 [2024-07-16 00:06:34.953334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.838 [2024-07-16 00:06:34.953340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.838 [2024-07-16 00:06:34.953354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.838 qpair failed and we were unable to recover it. 
00:30:19.838 [2024-07-16 00:06:34.963335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.838 [2024-07-16 00:06:34.963407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.838 [2024-07-16 00:06:34.963423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.838 [2024-07-16 00:06:34.963430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.838 [2024-07-16 00:06:34.963436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.838 [2024-07-16 00:06:34.963450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.838 qpair failed and we were unable to recover it. 00:30:19.838 [2024-07-16 00:06:34.973282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.838 [2024-07-16 00:06:34.973349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.838 [2024-07-16 00:06:34.973365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.838 [2024-07-16 00:06:34.973372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.838 [2024-07-16 00:06:34.973378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.838 [2024-07-16 00:06:34.973392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.838 qpair failed and we were unable to recover it. 00:30:19.838 [2024-07-16 00:06:34.983312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.838 [2024-07-16 00:06:34.983383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.838 [2024-07-16 00:06:34.983399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.838 [2024-07-16 00:06:34.983405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.838 [2024-07-16 00:06:34.983411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.838 [2024-07-16 00:06:34.983425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.838 qpair failed and we were unable to recover it. 
00:30:19.838 [2024-07-16 00:06:34.993344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.838 [2024-07-16 00:06:34.993406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.839 [2024-07-16 00:06:34.993421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.839 [2024-07-16 00:06:34.993428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.839 [2024-07-16 00:06:34.993434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.839 [2024-07-16 00:06:34.993448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.839 qpair failed and we were unable to recover it. 00:30:19.839 [2024-07-16 00:06:35.003359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.839 [2024-07-16 00:06:35.003420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.839 [2024-07-16 00:06:35.003436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.839 [2024-07-16 00:06:35.003443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.839 [2024-07-16 00:06:35.003449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.839 [2024-07-16 00:06:35.003462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.839 qpair failed and we were unable to recover it. 00:30:19.839 [2024-07-16 00:06:35.013415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.839 [2024-07-16 00:06:35.013534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.839 [2024-07-16 00:06:35.013550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.839 [2024-07-16 00:06:35.013557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.839 [2024-07-16 00:06:35.013567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.839 [2024-07-16 00:06:35.013581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.839 qpair failed and we were unable to recover it. 
00:30:19.839 [2024-07-16 00:06:35.023419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.839 [2024-07-16 00:06:35.023482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.839 [2024-07-16 00:06:35.023498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.839 [2024-07-16 00:06:35.023505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.839 [2024-07-16 00:06:35.023511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:19.839 [2024-07-16 00:06:35.023525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.839 qpair failed and we were unable to recover it. 00:30:20.100 [2024-07-16 00:06:35.033455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.100 [2024-07-16 00:06:35.033518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.100 [2024-07-16 00:06:35.033534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.101 [2024-07-16 00:06:35.033541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.101 [2024-07-16 00:06:35.033547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.101 [2024-07-16 00:06:35.033562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.101 qpair failed and we were unable to recover it. 00:30:20.101 [2024-07-16 00:06:35.043509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.101 [2024-07-16 00:06:35.043571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.101 [2024-07-16 00:06:35.043586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.101 [2024-07-16 00:06:35.043593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.101 [2024-07-16 00:06:35.043600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.101 [2024-07-16 00:06:35.043613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.101 qpair failed and we were unable to recover it. 
00:30:20.101 [2024-07-16 00:06:35.053517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.101 [2024-07-16 00:06:35.053577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.101 [2024-07-16 00:06:35.053593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.101 [2024-07-16 00:06:35.053600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.101 [2024-07-16 00:06:35.053606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.101 [2024-07-16 00:06:35.053620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.101 qpair failed and we were unable to recover it. 00:30:20.101 [2024-07-16 00:06:35.063427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.101 [2024-07-16 00:06:35.063487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.101 [2024-07-16 00:06:35.063502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.101 [2024-07-16 00:06:35.063510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.101 [2024-07-16 00:06:35.063516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.101 [2024-07-16 00:06:35.063529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.101 qpair failed and we were unable to recover it. 00:30:20.101 [2024-07-16 00:06:35.073572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.101 [2024-07-16 00:06:35.073640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.101 [2024-07-16 00:06:35.073655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.101 [2024-07-16 00:06:35.073662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.101 [2024-07-16 00:06:35.073668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.101 [2024-07-16 00:06:35.073682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.101 qpair failed and we were unable to recover it. 
00:30:20.101 [2024-07-16 00:06:35.083586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.101 [2024-07-16 00:06:35.083651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.101 [2024-07-16 00:06:35.083667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.101 [2024-07-16 00:06:35.083674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.101 [2024-07-16 00:06:35.083680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.101 [2024-07-16 00:06:35.083694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.101 qpair failed and we were unable to recover it. 00:30:20.101 [2024-07-16 00:06:35.093619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.101 [2024-07-16 00:06:35.093674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.101 [2024-07-16 00:06:35.093690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.101 [2024-07-16 00:06:35.093697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.101 [2024-07-16 00:06:35.093704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.101 [2024-07-16 00:06:35.093717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.101 qpair failed and we were unable to recover it. 00:30:20.101 [2024-07-16 00:06:35.103642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.101 [2024-07-16 00:06:35.103749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.101 [2024-07-16 00:06:35.103765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.101 [2024-07-16 00:06:35.103777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.101 [2024-07-16 00:06:35.103783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.101 [2024-07-16 00:06:35.103797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.101 qpair failed and we were unable to recover it. 
00:30:20.101 [2024-07-16 00:06:35.113685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.101 [2024-07-16 00:06:35.113746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.101 [2024-07-16 00:06:35.113761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.101 [2024-07-16 00:06:35.113768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.101 [2024-07-16 00:06:35.113774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.101 [2024-07-16 00:06:35.113788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.101 qpair failed and we were unable to recover it. 00:30:20.101 [2024-07-16 00:06:35.123663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.101 [2024-07-16 00:06:35.123726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.101 [2024-07-16 00:06:35.123741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.101 [2024-07-16 00:06:35.123748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.101 [2024-07-16 00:06:35.123754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.101 [2024-07-16 00:06:35.123768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.101 qpair failed and we were unable to recover it. 00:30:20.101 [2024-07-16 00:06:35.133725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.101 [2024-07-16 00:06:35.133781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.101 [2024-07-16 00:06:35.133796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.101 [2024-07-16 00:06:35.133803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.101 [2024-07-16 00:06:35.133809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.101 [2024-07-16 00:06:35.133823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.101 qpair failed and we were unable to recover it. 
00:30:20.101 [2024-07-16 00:06:35.143746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.101 [2024-07-16 00:06:35.143821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.101 [2024-07-16 00:06:35.143838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.101 [2024-07-16 00:06:35.143845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.101 [2024-07-16 00:06:35.143852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.101 [2024-07-16 00:06:35.143867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.101 qpair failed and we were unable to recover it. 00:30:20.102 [2024-07-16 00:06:35.153801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.102 [2024-07-16 00:06:35.153885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.102 [2024-07-16 00:06:35.153900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.102 [2024-07-16 00:06:35.153907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.102 [2024-07-16 00:06:35.153913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.102 [2024-07-16 00:06:35.153928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.102 qpair failed and we were unable to recover it. 00:30:20.102 [2024-07-16 00:06:35.163846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.102 [2024-07-16 00:06:35.163909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.102 [2024-07-16 00:06:35.163925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.102 [2024-07-16 00:06:35.163931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.102 [2024-07-16 00:06:35.163938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.102 [2024-07-16 00:06:35.163951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.102 qpair failed and we were unable to recover it. 
00:30:20.102 [2024-07-16 00:06:35.173887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.102 [2024-07-16 00:06:35.173953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.102 [2024-07-16 00:06:35.173979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.102 [2024-07-16 00:06:35.173987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.102 [2024-07-16 00:06:35.173995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.102 [2024-07-16 00:06:35.174015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.102 qpair failed and we were unable to recover it. 00:30:20.102 [2024-07-16 00:06:35.183748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.102 [2024-07-16 00:06:35.183813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.102 [2024-07-16 00:06:35.183832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.102 [2024-07-16 00:06:35.183839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.102 [2024-07-16 00:06:35.183846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.102 [2024-07-16 00:06:35.183862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.102 qpair failed and we were unable to recover it. 00:30:20.102 [2024-07-16 00:06:35.193895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.102 [2024-07-16 00:06:35.193956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.102 [2024-07-16 00:06:35.193972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.102 [2024-07-16 00:06:35.193984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.102 [2024-07-16 00:06:35.193990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.102 [2024-07-16 00:06:35.194005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.102 qpair failed and we were unable to recover it. 
00:30:20.102 [2024-07-16 00:06:35.203931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.102 [2024-07-16 00:06:35.203998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.102 [2024-07-16 00:06:35.204013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.102 [2024-07-16 00:06:35.204021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.102 [2024-07-16 00:06:35.204027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.102 [2024-07-16 00:06:35.204040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.102 qpair failed and we were unable to recover it. 00:30:20.102 [2024-07-16 00:06:35.213952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.102 [2024-07-16 00:06:35.214007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.102 [2024-07-16 00:06:35.214023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.102 [2024-07-16 00:06:35.214029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.102 [2024-07-16 00:06:35.214036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.102 [2024-07-16 00:06:35.214050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.102 qpair failed and we were unable to recover it. 00:30:20.102 [2024-07-16 00:06:35.223975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.102 [2024-07-16 00:06:35.224034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.102 [2024-07-16 00:06:35.224049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.102 [2024-07-16 00:06:35.224056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.102 [2024-07-16 00:06:35.224062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.102 [2024-07-16 00:06:35.224076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.102 qpair failed and we were unable to recover it. 
00:30:20.102 [2024-07-16 00:06:35.234013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.102 [2024-07-16 00:06:35.234072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.102 [2024-07-16 00:06:35.234087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.102 [2024-07-16 00:06:35.234094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.102 [2024-07-16 00:06:35.234100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.102 [2024-07-16 00:06:35.234114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.102 qpair failed and we were unable to recover it. 00:30:20.102 [2024-07-16 00:06:35.244020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.102 [2024-07-16 00:06:35.244079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.102 [2024-07-16 00:06:35.244094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.102 [2024-07-16 00:06:35.244101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.102 [2024-07-16 00:06:35.244108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.102 [2024-07-16 00:06:35.244121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.102 qpair failed and we were unable to recover it. 00:30:20.102 [2024-07-16 00:06:35.254041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.102 [2024-07-16 00:06:35.254102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.102 [2024-07-16 00:06:35.254117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.102 [2024-07-16 00:06:35.254124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.102 [2024-07-16 00:06:35.254130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.102 [2024-07-16 00:06:35.254144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.102 qpair failed and we were unable to recover it. 
00:30:20.102 [2024-07-16 00:06:35.263966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.102 [2024-07-16 00:06:35.264034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.102 [2024-07-16 00:06:35.264051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.102 [2024-07-16 00:06:35.264058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.103 [2024-07-16 00:06:35.264064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.103 [2024-07-16 00:06:35.264079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.103 qpair failed and we were unable to recover it. 00:30:20.103 [2024-07-16 00:06:35.274112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.103 [2024-07-16 00:06:35.274174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.103 [2024-07-16 00:06:35.274190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.103 [2024-07-16 00:06:35.274197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.103 [2024-07-16 00:06:35.274203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.103 [2024-07-16 00:06:35.274217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.103 qpair failed and we were unable to recover it. 00:30:20.103 [2024-07-16 00:06:35.284154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.103 [2024-07-16 00:06:35.284219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.103 [2024-07-16 00:06:35.284238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.103 [2024-07-16 00:06:35.284248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.103 [2024-07-16 00:06:35.284254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.103 [2024-07-16 00:06:35.284269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.103 qpair failed and we were unable to recover it. 
00:30:20.365 [2024-07-16 00:06:35.294166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.365 [2024-07-16 00:06:35.294221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.365 [2024-07-16 00:06:35.294239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.365 [2024-07-16 00:06:35.294246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.365 [2024-07-16 00:06:35.294252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.365 [2024-07-16 00:06:35.294267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.365 qpair failed and we were unable to recover it. 00:30:20.365 [2024-07-16 00:06:35.304201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.365 [2024-07-16 00:06:35.304258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.365 [2024-07-16 00:06:35.304274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.365 [2024-07-16 00:06:35.304281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.365 [2024-07-16 00:06:35.304287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.365 [2024-07-16 00:06:35.304301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.365 qpair failed and we were unable to recover it. 00:30:20.365 [2024-07-16 00:06:35.314101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.365 [2024-07-16 00:06:35.314158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.365 [2024-07-16 00:06:35.314173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.365 [2024-07-16 00:06:35.314180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.365 [2024-07-16 00:06:35.314186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.365 [2024-07-16 00:06:35.314200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.365 qpair failed and we were unable to recover it. 
00:30:20.365 [2024-07-16 00:06:35.324252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.365 [2024-07-16 00:06:35.324319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.365 [2024-07-16 00:06:35.324334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.365 [2024-07-16 00:06:35.324341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.365 [2024-07-16 00:06:35.324347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.365 [2024-07-16 00:06:35.324360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.365 qpair failed and we were unable to recover it. 00:30:20.365 [2024-07-16 00:06:35.334268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.365 [2024-07-16 00:06:35.334324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.365 [2024-07-16 00:06:35.334339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.365 [2024-07-16 00:06:35.334346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.365 [2024-07-16 00:06:35.334352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.365 [2024-07-16 00:06:35.334366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.365 qpair failed and we were unable to recover it. 00:30:20.365 [2024-07-16 00:06:35.344301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.365 [2024-07-16 00:06:35.344360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.365 [2024-07-16 00:06:35.344375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.365 [2024-07-16 00:06:35.344382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.365 [2024-07-16 00:06:35.344389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.365 [2024-07-16 00:06:35.344403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.365 qpair failed and we were unable to recover it. 
00:30:20.365 [2024-07-16 00:06:35.354328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.365 [2024-07-16 00:06:35.354384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.365 [2024-07-16 00:06:35.354399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.365 [2024-07-16 00:06:35.354406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.365 [2024-07-16 00:06:35.354412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.365 [2024-07-16 00:06:35.354426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.365 qpair failed and we were unable to recover it. 00:30:20.365 [2024-07-16 00:06:35.364380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.365 [2024-07-16 00:06:35.364447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.365 [2024-07-16 00:06:35.364462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.365 [2024-07-16 00:06:35.364469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.365 [2024-07-16 00:06:35.364475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.365 [2024-07-16 00:06:35.364488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.365 qpair failed and we were unable to recover it. 00:30:20.365 [2024-07-16 00:06:35.374470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.365 [2024-07-16 00:06:35.374529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.365 [2024-07-16 00:06:35.374547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.365 [2024-07-16 00:06:35.374554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.365 [2024-07-16 00:06:35.374560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.365 [2024-07-16 00:06:35.374574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.365 qpair failed and we were unable to recover it. 
00:30:20.365 [2024-07-16 00:06:35.384418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.366 [2024-07-16 00:06:35.384475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.366 [2024-07-16 00:06:35.384491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.366 [2024-07-16 00:06:35.384497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.366 [2024-07-16 00:06:35.384504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.366 [2024-07-16 00:06:35.384517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.366 qpair failed and we were unable to recover it. 00:30:20.366 [2024-07-16 00:06:35.394523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.366 [2024-07-16 00:06:35.394581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.366 [2024-07-16 00:06:35.394596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.366 [2024-07-16 00:06:35.394603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.366 [2024-07-16 00:06:35.394609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.366 [2024-07-16 00:06:35.394623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.366 qpair failed and we were unable to recover it. 00:30:20.366 [2024-07-16 00:06:35.404470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.366 [2024-07-16 00:06:35.404537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.366 [2024-07-16 00:06:35.404551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.366 [2024-07-16 00:06:35.404559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.366 [2024-07-16 00:06:35.404565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.366 [2024-07-16 00:06:35.404578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.366 qpair failed and we were unable to recover it. 
00:30:20.366 [2024-07-16 00:06:35.414367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.366 [2024-07-16 00:06:35.414425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.366 [2024-07-16 00:06:35.414440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.366 [2024-07-16 00:06:35.414447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.366 [2024-07-16 00:06:35.414453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.366 [2024-07-16 00:06:35.414471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.366 qpair failed and we were unable to recover it. 00:30:20.366 [2024-07-16 00:06:35.424405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.366 [2024-07-16 00:06:35.424459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.366 [2024-07-16 00:06:35.424475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.366 [2024-07-16 00:06:35.424483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.366 [2024-07-16 00:06:35.424489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.366 [2024-07-16 00:06:35.424503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.366 qpair failed and we were unable to recover it. 00:30:20.366 [2024-07-16 00:06:35.434563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.366 [2024-07-16 00:06:35.434622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.366 [2024-07-16 00:06:35.434638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.366 [2024-07-16 00:06:35.434646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.366 [2024-07-16 00:06:35.434652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.366 [2024-07-16 00:06:35.434666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.366 qpair failed and we were unable to recover it. 
00:30:20.366 [2024-07-16 00:06:35.444576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.366 [2024-07-16 00:06:35.444641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.366 [2024-07-16 00:06:35.444656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.366 [2024-07-16 00:06:35.444663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.366 [2024-07-16 00:06:35.444670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.366 [2024-07-16 00:06:35.444683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.366 qpair failed and we were unable to recover it. 00:30:20.366 [2024-07-16 00:06:35.454588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.366 [2024-07-16 00:06:35.454646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.366 [2024-07-16 00:06:35.454661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.366 [2024-07-16 00:06:35.454668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.366 [2024-07-16 00:06:35.454674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.366 [2024-07-16 00:06:35.454688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.366 qpair failed and we were unable to recover it. 00:30:20.366 [2024-07-16 00:06:35.464621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.366 [2024-07-16 00:06:35.464686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.366 [2024-07-16 00:06:35.464705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.366 [2024-07-16 00:06:35.464712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.366 [2024-07-16 00:06:35.464718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.366 [2024-07-16 00:06:35.464731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.366 qpair failed and we were unable to recover it. 
00:30:20.366 [2024-07-16 00:06:35.474537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.366 [2024-07-16 00:06:35.474594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.366 [2024-07-16 00:06:35.474609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.366 [2024-07-16 00:06:35.474616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.366 [2024-07-16 00:06:35.474623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.366 [2024-07-16 00:06:35.474636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.366 qpair failed and we were unable to recover it. 00:30:20.366 [2024-07-16 00:06:35.484678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.366 [2024-07-16 00:06:35.484747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.366 [2024-07-16 00:06:35.484763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.366 [2024-07-16 00:06:35.484770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.366 [2024-07-16 00:06:35.484776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.366 [2024-07-16 00:06:35.484790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.366 qpair failed and we were unable to recover it. 00:30:20.366 [2024-07-16 00:06:35.494709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.366 [2024-07-16 00:06:35.494766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.366 [2024-07-16 00:06:35.494781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.366 [2024-07-16 00:06:35.494788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.367 [2024-07-16 00:06:35.494794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.367 [2024-07-16 00:06:35.494807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.367 qpair failed and we were unable to recover it. 
00:30:20.367 [2024-07-16 00:06:35.504739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.367 [2024-07-16 00:06:35.504793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.367 [2024-07-16 00:06:35.504808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.367 [2024-07-16 00:06:35.504815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.367 [2024-07-16 00:06:35.504821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.367 [2024-07-16 00:06:35.504838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.367 qpair failed and we were unable to recover it. 00:30:20.367 [2024-07-16 00:06:35.514752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.367 [2024-07-16 00:06:35.514815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.367 [2024-07-16 00:06:35.514830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.367 [2024-07-16 00:06:35.514837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.367 [2024-07-16 00:06:35.514843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.367 [2024-07-16 00:06:35.514857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.367 qpair failed and we were unable to recover it. 00:30:20.367 [2024-07-16 00:06:35.524869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.367 [2024-07-16 00:06:35.524932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.367 [2024-07-16 00:06:35.524947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.367 [2024-07-16 00:06:35.524954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.367 [2024-07-16 00:06:35.524960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.367 [2024-07-16 00:06:35.524973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.367 qpair failed and we were unable to recover it. 
00:30:20.367 [2024-07-16 00:06:35.534803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.367 [2024-07-16 00:06:35.534869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.367 [2024-07-16 00:06:35.534894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.367 [2024-07-16 00:06:35.534903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.367 [2024-07-16 00:06:35.534910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.367 [2024-07-16 00:06:35.534929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.367 qpair failed and we were unable to recover it. 00:30:20.367 [2024-07-16 00:06:35.544706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.367 [2024-07-16 00:06:35.544769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.367 [2024-07-16 00:06:35.544786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.367 [2024-07-16 00:06:35.544794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.367 [2024-07-16 00:06:35.544800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.367 [2024-07-16 00:06:35.544815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.367 qpair failed and we were unable to recover it. 00:30:20.629 [2024-07-16 00:06:35.554773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.629 [2024-07-16 00:06:35.554833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.629 [2024-07-16 00:06:35.554854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.629 [2024-07-16 00:06:35.554861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.629 [2024-07-16 00:06:35.554867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.629 [2024-07-16 00:06:35.554882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.629 qpair failed and we were unable to recover it. 
00:30:20.629 [2024-07-16 00:06:35.564869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.629 [2024-07-16 00:06:35.564936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.629 [2024-07-16 00:06:35.564951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.629 [2024-07-16 00:06:35.564958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.629 [2024-07-16 00:06:35.564964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.629 [2024-07-16 00:06:35.564978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.629 qpair failed and we were unable to recover it. 00:30:20.629 [2024-07-16 00:06:35.574904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.629 [2024-07-16 00:06:35.574969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.629 [2024-07-16 00:06:35.574984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.629 [2024-07-16 00:06:35.574991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.629 [2024-07-16 00:06:35.574997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.629 [2024-07-16 00:06:35.575011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.629 qpair failed and we were unable to recover it. 00:30:20.629 [2024-07-16 00:06:35.584940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.630 [2024-07-16 00:06:35.584998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.630 [2024-07-16 00:06:35.585013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.630 [2024-07-16 00:06:35.585020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.630 [2024-07-16 00:06:35.585026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.630 [2024-07-16 00:06:35.585040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.630 qpair failed and we were unable to recover it. 
00:30:20.630 [2024-07-16 00:06:35.594986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.630 [2024-07-16 00:06:35.595046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.630 [2024-07-16 00:06:35.595061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.630 [2024-07-16 00:06:35.595068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.630 [2024-07-16 00:06:35.595074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.630 [2024-07-16 00:06:35.595092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.630 qpair failed and we were unable to recover it. 00:30:20.630 [2024-07-16 00:06:35.604991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.630 [2024-07-16 00:06:35.605059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.630 [2024-07-16 00:06:35.605074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.630 [2024-07-16 00:06:35.605081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.630 [2024-07-16 00:06:35.605087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.630 [2024-07-16 00:06:35.605101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.630 qpair failed and we were unable to recover it. 00:30:20.630 [2024-07-16 00:06:35.615034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.630 [2024-07-16 00:06:35.615110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.630 [2024-07-16 00:06:35.615127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.630 [2024-07-16 00:06:35.615138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.630 [2024-07-16 00:06:35.615144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.630 [2024-07-16 00:06:35.615159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.630 qpair failed and we were unable to recover it. 
00:30:20.630 [2024-07-16 00:06:35.625062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.630 [2024-07-16 00:06:35.625120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.630 [2024-07-16 00:06:35.625135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.630 [2024-07-16 00:06:35.625143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.630 [2024-07-16 00:06:35.625149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.630 [2024-07-16 00:06:35.625163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.630 qpair failed and we were unable to recover it. 00:30:20.630 [2024-07-16 00:06:35.635077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.630 [2024-07-16 00:06:35.635135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.630 [2024-07-16 00:06:35.635150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.630 [2024-07-16 00:06:35.635157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.630 [2024-07-16 00:06:35.635164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.630 [2024-07-16 00:06:35.635177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.630 qpair failed and we were unable to recover it. 00:30:20.630 [2024-07-16 00:06:35.645088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.630 [2024-07-16 00:06:35.645147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.630 [2024-07-16 00:06:35.645166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.630 [2024-07-16 00:06:35.645174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.630 [2024-07-16 00:06:35.645180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.630 [2024-07-16 00:06:35.645194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.630 qpair failed and we were unable to recover it. 
00:30:20.630 [2024-07-16 00:06:35.655129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.630 [2024-07-16 00:06:35.655190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.630 [2024-07-16 00:06:35.655205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.630 [2024-07-16 00:06:35.655212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.630 [2024-07-16 00:06:35.655218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.630 [2024-07-16 00:06:35.655236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.630 qpair failed and we were unable to recover it. 00:30:20.630 [2024-07-16 00:06:35.665152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.630 [2024-07-16 00:06:35.665264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.630 [2024-07-16 00:06:35.665280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.630 [2024-07-16 00:06:35.665287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.630 [2024-07-16 00:06:35.665293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.630 [2024-07-16 00:06:35.665308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.630 qpair failed and we were unable to recover it. 00:30:20.630 [2024-07-16 00:06:35.675179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.630 [2024-07-16 00:06:35.675243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.630 [2024-07-16 00:06:35.675259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.630 [2024-07-16 00:06:35.675266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.630 [2024-07-16 00:06:35.675273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.630 [2024-07-16 00:06:35.675287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.630 qpair failed and we were unable to recover it. 
00:30:20.630 [2024-07-16 00:06:35.685218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.630 [2024-07-16 00:06:35.685287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.630 [2024-07-16 00:06:35.685302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.630 [2024-07-16 00:06:35.685309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.630 [2024-07-16 00:06:35.685319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.630 [2024-07-16 00:06:35.685334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.630 qpair failed and we were unable to recover it. 00:30:20.630 [2024-07-16 00:06:35.695138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.630 [2024-07-16 00:06:35.695251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.630 [2024-07-16 00:06:35.695267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.630 [2024-07-16 00:06:35.695283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.631 [2024-07-16 00:06:35.695289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.631 [2024-07-16 00:06:35.695304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.631 qpair failed and we were unable to recover it. 00:30:20.631 [2024-07-16 00:06:35.705248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.631 [2024-07-16 00:06:35.705310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.631 [2024-07-16 00:06:35.705326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.631 [2024-07-16 00:06:35.705334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.631 [2024-07-16 00:06:35.705340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.631 [2024-07-16 00:06:35.705354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.631 qpair failed and we were unable to recover it. 
00:30:20.631 [2024-07-16 00:06:35.715289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.631 [2024-07-16 00:06:35.715352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.631 [2024-07-16 00:06:35.715367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.631 [2024-07-16 00:06:35.715375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.631 [2024-07-16 00:06:35.715381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.631 [2024-07-16 00:06:35.715395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.631 qpair failed and we were unable to recover it. 00:30:20.631 [2024-07-16 00:06:35.725315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.631 [2024-07-16 00:06:35.725379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.631 [2024-07-16 00:06:35.725395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.631 [2024-07-16 00:06:35.725402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.631 [2024-07-16 00:06:35.725408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.631 [2024-07-16 00:06:35.725422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.631 qpair failed and we were unable to recover it. 00:30:20.631 [2024-07-16 00:06:35.735332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.631 [2024-07-16 00:06:35.735416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.631 [2024-07-16 00:06:35.735431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.631 [2024-07-16 00:06:35.735438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.631 [2024-07-16 00:06:35.735445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.631 [2024-07-16 00:06:35.735459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.631 qpair failed and we were unable to recover it. 
00:30:20.631 [2024-07-16 00:06:35.745362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.631 [2024-07-16 00:06:35.745422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.631 [2024-07-16 00:06:35.745437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.631 [2024-07-16 00:06:35.745444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.631 [2024-07-16 00:06:35.745450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.631 [2024-07-16 00:06:35.745464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.631 qpair failed and we were unable to recover it. 00:30:20.631 [2024-07-16 00:06:35.755390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.631 [2024-07-16 00:06:35.755451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.631 [2024-07-16 00:06:35.755466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.631 [2024-07-16 00:06:35.755473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.631 [2024-07-16 00:06:35.755479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.631 [2024-07-16 00:06:35.755493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.631 qpair failed and we were unable to recover it. 00:30:20.631 [2024-07-16 00:06:35.765398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.631 [2024-07-16 00:06:35.765465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.631 [2024-07-16 00:06:35.765479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.631 [2024-07-16 00:06:35.765486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.631 [2024-07-16 00:06:35.765492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.631 [2024-07-16 00:06:35.765506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.631 qpair failed and we were unable to recover it. 
00:30:20.631 [2024-07-16 00:06:35.775456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.631 [2024-07-16 00:06:35.775513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.631 [2024-07-16 00:06:35.775528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.631 [2024-07-16 00:06:35.775535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.631 [2024-07-16 00:06:35.775544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.631 [2024-07-16 00:06:35.775558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.631 qpair failed and we were unable to recover it. 00:30:20.631 [2024-07-16 00:06:35.785450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.631 [2024-07-16 00:06:35.785515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.631 [2024-07-16 00:06:35.785530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.631 [2024-07-16 00:06:35.785537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.631 [2024-07-16 00:06:35.785543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.631 [2024-07-16 00:06:35.785557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.631 qpair failed and we were unable to recover it. 00:30:20.631 [2024-07-16 00:06:35.795517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.631 [2024-07-16 00:06:35.795579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.631 [2024-07-16 00:06:35.795594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.631 [2024-07-16 00:06:35.795601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.631 [2024-07-16 00:06:35.795607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.631 [2024-07-16 00:06:35.795621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.631 qpair failed and we were unable to recover it. 
00:30:20.631 [2024-07-16 00:06:35.805533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.631 [2024-07-16 00:06:35.805638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.631 [2024-07-16 00:06:35.805654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.631 [2024-07-16 00:06:35.805661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.631 [2024-07-16 00:06:35.805667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.631 [2024-07-16 00:06:35.805681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.631 qpair failed and we were unable to recover it. 00:30:20.631 [2024-07-16 00:06:35.815555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.632 [2024-07-16 00:06:35.815614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.632 [2024-07-16 00:06:35.815629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.632 [2024-07-16 00:06:35.815637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.632 [2024-07-16 00:06:35.815643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.632 [2024-07-16 00:06:35.815656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.632 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-16 00:06:35.825559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.893 [2024-07-16 00:06:35.825634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.893 [2024-07-16 00:06:35.825649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.893 [2024-07-16 00:06:35.825656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.893 [2024-07-16 00:06:35.825662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.893 [2024-07-16 00:06:35.825676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.893 qpair failed and we were unable to recover it. 
00:30:20.893 [2024-07-16 00:06:35.835599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.893 [2024-07-16 00:06:35.835660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.893 [2024-07-16 00:06:35.835675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.893 [2024-07-16 00:06:35.835682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.893 [2024-07-16 00:06:35.835688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.893 [2024-07-16 00:06:35.835702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.893 qpair failed and we were unable to recover it. 00:30:20.893 [2024-07-16 00:06:35.845618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.893 [2024-07-16 00:06:35.845681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.893 [2024-07-16 00:06:35.845696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.894 [2024-07-16 00:06:35.845703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.894 [2024-07-16 00:06:35.845709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.894 [2024-07-16 00:06:35.845723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.894 qpair failed and we were unable to recover it. 00:30:20.894 [2024-07-16 00:06:35.855658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.894 [2024-07-16 00:06:35.855715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.894 [2024-07-16 00:06:35.855730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.894 [2024-07-16 00:06:35.855737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.894 [2024-07-16 00:06:35.855743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.894 [2024-07-16 00:06:35.855757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.894 qpair failed and we were unable to recover it. 
00:30:20.894 [2024-07-16 00:06:35.865719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.894 [2024-07-16 00:06:35.865801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.894 [2024-07-16 00:06:35.865816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.894 [2024-07-16 00:06:35.865828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.894 [2024-07-16 00:06:35.865834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.894 [2024-07-16 00:06:35.865848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.894 qpair failed and we were unable to recover it. 00:30:20.894 [2024-07-16 00:06:35.875744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.894 [2024-07-16 00:06:35.875804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.894 [2024-07-16 00:06:35.875819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.894 [2024-07-16 00:06:35.875826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.894 [2024-07-16 00:06:35.875833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.894 [2024-07-16 00:06:35.875846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.894 qpair failed and we were unable to recover it. 00:30:20.894 [2024-07-16 00:06:35.885750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.894 [2024-07-16 00:06:35.885811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.894 [2024-07-16 00:06:35.885825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.894 [2024-07-16 00:06:35.885833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.894 [2024-07-16 00:06:35.885839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.894 [2024-07-16 00:06:35.885852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.894 qpair failed and we were unable to recover it. 
00:30:20.894 [2024-07-16 00:06:35.895746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.894 [2024-07-16 00:06:35.895806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.894 [2024-07-16 00:06:35.895821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.894 [2024-07-16 00:06:35.895828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.894 [2024-07-16 00:06:35.895834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.894 [2024-07-16 00:06:35.895848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.894 qpair failed and we were unable to recover it. 00:30:20.894 [2024-07-16 00:06:35.905785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.894 [2024-07-16 00:06:35.905843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.894 [2024-07-16 00:06:35.905858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.894 [2024-07-16 00:06:35.905866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.894 [2024-07-16 00:06:35.905872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.894 [2024-07-16 00:06:35.905886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.894 qpair failed and we were unable to recover it. 00:30:20.894 [2024-07-16 00:06:35.915823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.894 [2024-07-16 00:06:35.915882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.894 [2024-07-16 00:06:35.915897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.894 [2024-07-16 00:06:35.915904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.894 [2024-07-16 00:06:35.915910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.894 [2024-07-16 00:06:35.915924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.894 qpair failed and we were unable to recover it. 
00:30:20.894 [2024-07-16 00:06:35.925773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.894 [2024-07-16 00:06:35.925883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.894 [2024-07-16 00:06:35.925899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.894 [2024-07-16 00:06:35.925905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.894 [2024-07-16 00:06:35.925911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.894 [2024-07-16 00:06:35.925925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.894 qpair failed and we were unable to recover it. 00:30:20.894 [2024-07-16 00:06:35.935879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.894 [2024-07-16 00:06:35.935945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.894 [2024-07-16 00:06:35.935960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.894 [2024-07-16 00:06:35.935967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.894 [2024-07-16 00:06:35.935973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.894 [2024-07-16 00:06:35.935987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.894 qpair failed and we were unable to recover it. 00:30:20.894 [2024-07-16 00:06:35.945897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.894 [2024-07-16 00:06:35.945962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.894 [2024-07-16 00:06:35.945987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.894 [2024-07-16 00:06:35.945996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.894 [2024-07-16 00:06:35.946004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.894 [2024-07-16 00:06:35.946023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.894 qpair failed and we were unable to recover it. 
00:30:20.894 [2024-07-16 00:06:35.955932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.894 [2024-07-16 00:06:35.956003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.894 [2024-07-16 00:06:35.956029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.894 [2024-07-16 00:06:35.956042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.894 [2024-07-16 00:06:35.956049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.895 [2024-07-16 00:06:35.956068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.895 qpair failed and we were unable to recover it. 00:30:20.895 [2024-07-16 00:06:35.965927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.895 [2024-07-16 00:06:35.965995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.895 [2024-07-16 00:06:35.966021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.895 [2024-07-16 00:06:35.966029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.895 [2024-07-16 00:06:35.966036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.895 [2024-07-16 00:06:35.966055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.895 qpair failed and we were unable to recover it. 00:30:20.895 [2024-07-16 00:06:35.976006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.895 [2024-07-16 00:06:35.976067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.895 [2024-07-16 00:06:35.976092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.895 [2024-07-16 00:06:35.976101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.895 [2024-07-16 00:06:35.976108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.895 [2024-07-16 00:06:35.976126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.895 qpair failed and we were unable to recover it. 
00:30:20.895 [2024-07-16 00:06:35.986006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.895 [2024-07-16 00:06:35.986071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.895 [2024-07-16 00:06:35.986087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.895 [2024-07-16 00:06:35.986094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.895 [2024-07-16 00:06:35.986101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.895 [2024-07-16 00:06:35.986116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.895 qpair failed and we were unable to recover it. 00:30:20.895 [2024-07-16 00:06:35.996036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.895 [2024-07-16 00:06:35.996108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.895 [2024-07-16 00:06:35.996124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.895 [2024-07-16 00:06:35.996131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.895 [2024-07-16 00:06:35.996138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.895 [2024-07-16 00:06:35.996152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.895 qpair failed and we were unable to recover it. 00:30:20.895 [2024-07-16 00:06:36.006076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.895 [2024-07-16 00:06:36.006140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.895 [2024-07-16 00:06:36.006156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.895 [2024-07-16 00:06:36.006163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.895 [2024-07-16 00:06:36.006169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.895 [2024-07-16 00:06:36.006182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.895 qpair failed and we were unable to recover it. 
00:30:20.895 [2024-07-16 00:06:36.016121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.895 [2024-07-16 00:06:36.016186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.895 [2024-07-16 00:06:36.016201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.895 [2024-07-16 00:06:36.016208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.895 [2024-07-16 00:06:36.016214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.895 [2024-07-16 00:06:36.016234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.895 qpair failed and we were unable to recover it. 00:30:20.895 [2024-07-16 00:06:36.026171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.895 [2024-07-16 00:06:36.026247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.895 [2024-07-16 00:06:36.026263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.895 [2024-07-16 00:06:36.026271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.895 [2024-07-16 00:06:36.026277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.895 [2024-07-16 00:06:36.026291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.895 qpair failed and we were unable to recover it. 00:30:20.895 [2024-07-16 00:06:36.036155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.895 [2024-07-16 00:06:36.036221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.895 [2024-07-16 00:06:36.036239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.895 [2024-07-16 00:06:36.036246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.895 [2024-07-16 00:06:36.036253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.895 [2024-07-16 00:06:36.036268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.895 qpair failed and we were unable to recover it. 
00:30:20.895 [2024-07-16 00:06:36.046186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.895 [2024-07-16 00:06:36.046255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.895 [2024-07-16 00:06:36.046270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.895 [2024-07-16 00:06:36.046281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.895 [2024-07-16 00:06:36.046287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.895 [2024-07-16 00:06:36.046301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.895 qpair failed and we were unable to recover it. 00:30:20.895 [2024-07-16 00:06:36.056207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.895 [2024-07-16 00:06:36.056267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.895 [2024-07-16 00:06:36.056283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.895 [2024-07-16 00:06:36.056290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.895 [2024-07-16 00:06:36.056296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.895 [2024-07-16 00:06:36.056311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.895 qpair failed and we were unable to recover it. 00:30:20.895 [2024-07-16 00:06:36.066255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.895 [2024-07-16 00:06:36.066315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.895 [2024-07-16 00:06:36.066330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.895 [2024-07-16 00:06:36.066337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.895 [2024-07-16 00:06:36.066344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.895 [2024-07-16 00:06:36.066358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.895 qpair failed and we were unable to recover it. 
00:30:20.895 [2024-07-16 00:06:36.076242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:20.896 [2024-07-16 00:06:36.076349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:20.896 [2024-07-16 00:06:36.076365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:20.896 [2024-07-16 00:06:36.076373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:20.896 [2024-07-16 00:06:36.076379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:20.896 [2024-07-16 00:06:36.076393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.896 qpair failed and we were unable to recover it. 00:30:21.157 [2024-07-16 00:06:36.086310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.157 [2024-07-16 00:06:36.086398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.157 [2024-07-16 00:06:36.086413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.157 [2024-07-16 00:06:36.086421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.157 [2024-07-16 00:06:36.086427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.157 [2024-07-16 00:06:36.086441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.157 qpair failed and we were unable to recover it. 00:30:21.157 [2024-07-16 00:06:36.096310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.157 [2024-07-16 00:06:36.096369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.157 [2024-07-16 00:06:36.096384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.157 [2024-07-16 00:06:36.096392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.157 [2024-07-16 00:06:36.096398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.157 [2024-07-16 00:06:36.096411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.157 qpair failed and we were unable to recover it. 
00:30:21.157 [2024-07-16 00:06:36.106350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.157 [2024-07-16 00:06:36.106417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.157 [2024-07-16 00:06:36.106432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.157 [2024-07-16 00:06:36.106439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.157 [2024-07-16 00:06:36.106445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.157 [2024-07-16 00:06:36.106460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.157 qpair failed and we were unable to recover it. 00:30:21.157 [2024-07-16 00:06:36.116371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.157 [2024-07-16 00:06:36.116443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.157 [2024-07-16 00:06:36.116458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.158 [2024-07-16 00:06:36.116466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.158 [2024-07-16 00:06:36.116472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.158 [2024-07-16 00:06:36.116487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.158 qpair failed and we were unable to recover it. 00:30:21.158 [2024-07-16 00:06:36.126400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.158 [2024-07-16 00:06:36.126456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.158 [2024-07-16 00:06:36.126471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.158 [2024-07-16 00:06:36.126478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.158 [2024-07-16 00:06:36.126484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.158 [2024-07-16 00:06:36.126498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.158 qpair failed and we were unable to recover it. 
00:30:21.158 [2024-07-16 00:06:36.136446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.158 [2024-07-16 00:06:36.136503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.158 [2024-07-16 00:06:36.136522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.158 [2024-07-16 00:06:36.136529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.158 [2024-07-16 00:06:36.136535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.158 [2024-07-16 00:06:36.136548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.158 qpair failed and we were unable to recover it. 00:30:21.158 [2024-07-16 00:06:36.146463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.158 [2024-07-16 00:06:36.146523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.158 [2024-07-16 00:06:36.146539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.158 [2024-07-16 00:06:36.146546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.158 [2024-07-16 00:06:36.146553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.158 [2024-07-16 00:06:36.146567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.158 qpair failed and we were unable to recover it. 00:30:21.158 [2024-07-16 00:06:36.156469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.158 [2024-07-16 00:06:36.156584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.158 [2024-07-16 00:06:36.156600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.158 [2024-07-16 00:06:36.156607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.158 [2024-07-16 00:06:36.156613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.158 [2024-07-16 00:06:36.156628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.158 qpair failed and we were unable to recover it. 
00:30:21.158 [2024-07-16 00:06:36.166394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.158 [2024-07-16 00:06:36.166454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.158 [2024-07-16 00:06:36.166469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.158 [2024-07-16 00:06:36.166476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.158 [2024-07-16 00:06:36.166482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.158 [2024-07-16 00:06:36.166496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.158 qpair failed and we were unable to recover it. 00:30:21.158 [2024-07-16 00:06:36.176592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.158 [2024-07-16 00:06:36.176669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.158 [2024-07-16 00:06:36.176684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.158 [2024-07-16 00:06:36.176692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.158 [2024-07-16 00:06:36.176698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.158 [2024-07-16 00:06:36.176712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.158 qpair failed and we were unable to recover it. 00:30:21.158 [2024-07-16 00:06:36.186449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.158 [2024-07-16 00:06:36.186505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.158 [2024-07-16 00:06:36.186520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.158 [2024-07-16 00:06:36.186527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.158 [2024-07-16 00:06:36.186534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.158 [2024-07-16 00:06:36.186547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.158 qpair failed and we were unable to recover it. 
00:30:21.158 [2024-07-16 00:06:36.196594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.158 [2024-07-16 00:06:36.196655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.158 [2024-07-16 00:06:36.196670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.158 [2024-07-16 00:06:36.196677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.158 [2024-07-16 00:06:36.196684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.158 [2024-07-16 00:06:36.196697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.158 qpair failed and we were unable to recover it. 00:30:21.158 [2024-07-16 00:06:36.206617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.158 [2024-07-16 00:06:36.206678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.158 [2024-07-16 00:06:36.206694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.158 [2024-07-16 00:06:36.206701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.158 [2024-07-16 00:06:36.206707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.158 [2024-07-16 00:06:36.206721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.158 qpair failed and we were unable to recover it. 00:30:21.158 [2024-07-16 00:06:36.216630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.158 [2024-07-16 00:06:36.216690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.158 [2024-07-16 00:06:36.216705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.158 [2024-07-16 00:06:36.216712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.158 [2024-07-16 00:06:36.216718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.158 [2024-07-16 00:06:36.216732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.158 qpair failed and we were unable to recover it. 
00:30:21.158 [2024-07-16 00:06:36.226656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.158 [2024-07-16 00:06:36.226714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.158 [2024-07-16 00:06:36.226733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.158 [2024-07-16 00:06:36.226740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.158 [2024-07-16 00:06:36.226746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.158 [2024-07-16 00:06:36.226760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.159 qpair failed and we were unable to recover it. 00:30:21.159 [2024-07-16 00:06:36.236706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.159 [2024-07-16 00:06:36.236776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.159 [2024-07-16 00:06:36.236791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.159 [2024-07-16 00:06:36.236798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.159 [2024-07-16 00:06:36.236804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.159 [2024-07-16 00:06:36.236818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.159 qpair failed and we were unable to recover it. 00:30:21.159 [2024-07-16 00:06:36.246606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.159 [2024-07-16 00:06:36.246673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.159 [2024-07-16 00:06:36.246688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.159 [2024-07-16 00:06:36.246695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.159 [2024-07-16 00:06:36.246701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.159 [2024-07-16 00:06:36.246715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.159 qpair failed and we were unable to recover it. 
00:30:21.159 [2024-07-16 00:06:36.256749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.159 [2024-07-16 00:06:36.256853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.159 [2024-07-16 00:06:36.256868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.159 [2024-07-16 00:06:36.256875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.159 [2024-07-16 00:06:36.256881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.159 [2024-07-16 00:06:36.256895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.159 qpair failed and we were unable to recover it. 00:30:21.159 [2024-07-16 00:06:36.266768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.159 [2024-07-16 00:06:36.266824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.159 [2024-07-16 00:06:36.266840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.159 [2024-07-16 00:06:36.266847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.159 [2024-07-16 00:06:36.266853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.159 [2024-07-16 00:06:36.266870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.159 qpair failed and we were unable to recover it. 00:30:21.159 [2024-07-16 00:06:36.276807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.159 [2024-07-16 00:06:36.276867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.159 [2024-07-16 00:06:36.276882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.159 [2024-07-16 00:06:36.276889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.159 [2024-07-16 00:06:36.276895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.159 [2024-07-16 00:06:36.276909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.159 qpair failed and we were unable to recover it. 
00:30:21.159 [2024-07-16 00:06:36.286833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.159 [2024-07-16 00:06:36.286904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.159 [2024-07-16 00:06:36.286919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.159 [2024-07-16 00:06:36.286926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.159 [2024-07-16 00:06:36.286932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.159 [2024-07-16 00:06:36.286946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.159 qpair failed and we were unable to recover it. 00:30:21.159 [2024-07-16 00:06:36.296862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.159 [2024-07-16 00:06:36.296971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.159 [2024-07-16 00:06:36.296997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.159 [2024-07-16 00:06:36.297006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.159 [2024-07-16 00:06:36.297013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.159 [2024-07-16 00:06:36.297031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.159 qpair failed and we were unable to recover it. 00:30:21.159 [2024-07-16 00:06:36.306910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.159 [2024-07-16 00:06:36.306980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.159 [2024-07-16 00:06:36.307005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.159 [2024-07-16 00:06:36.307015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.159 [2024-07-16 00:06:36.307021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.159 [2024-07-16 00:06:36.307040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.159 qpair failed and we were unable to recover it. 
00:30:21.159 [2024-07-16 00:06:36.316921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.159 [2024-07-16 00:06:36.316988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.159 [2024-07-16 00:06:36.317017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.159 [2024-07-16 00:06:36.317027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.159 [2024-07-16 00:06:36.317034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.159 [2024-07-16 00:06:36.317053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.159 qpair failed and we were unable to recover it. 00:30:21.159 [2024-07-16 00:06:36.326939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.159 [2024-07-16 00:06:36.327033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.159 [2024-07-16 00:06:36.327050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.159 [2024-07-16 00:06:36.327058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.159 [2024-07-16 00:06:36.327065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.159 [2024-07-16 00:06:36.327080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.159 qpair failed and we were unable to recover it. 00:30:21.159 [2024-07-16 00:06:36.336973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.159 [2024-07-16 00:06:36.337077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.159 [2024-07-16 00:06:36.337092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.159 [2024-07-16 00:06:36.337100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.159 [2024-07-16 00:06:36.337106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.159 [2024-07-16 00:06:36.337121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.159 qpair failed and we were unable to recover it. 
00:30:21.421 [2024-07-16 00:06:36.347029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.421 [2024-07-16 00:06:36.347089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.421 [2024-07-16 00:06:36.347105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.421 [2024-07-16 00:06:36.347112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.421 [2024-07-16 00:06:36.347118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.421 [2024-07-16 00:06:36.347132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.421 qpair failed and we were unable to recover it. 00:30:21.421 [2024-07-16 00:06:36.356953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.421 [2024-07-16 00:06:36.357013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.421 [2024-07-16 00:06:36.357029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.421 [2024-07-16 00:06:36.357036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.421 [2024-07-16 00:06:36.357042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.421 [2024-07-16 00:06:36.357060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.421 qpair failed and we were unable to recover it. 00:30:21.421 [2024-07-16 00:06:36.367061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.421 [2024-07-16 00:06:36.367125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.421 [2024-07-16 00:06:36.367140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.421 [2024-07-16 00:06:36.367147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.421 [2024-07-16 00:06:36.367153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.421 [2024-07-16 00:06:36.367167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.421 qpair failed and we were unable to recover it. 
00:30:21.421 [2024-07-16 00:06:36.376958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.421 [2024-07-16 00:06:36.377025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.421 [2024-07-16 00:06:36.377040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.421 [2024-07-16 00:06:36.377049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.421 [2024-07-16 00:06:36.377056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.421 [2024-07-16 00:06:36.377072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.421 qpair failed and we were unable to recover it. 00:30:21.421 [2024-07-16 00:06:36.387106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.421 [2024-07-16 00:06:36.387173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.421 [2024-07-16 00:06:36.387188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.421 [2024-07-16 00:06:36.387195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.421 [2024-07-16 00:06:36.387201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.421 [2024-07-16 00:06:36.387214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.421 qpair failed and we were unable to recover it. 00:30:21.421 [2024-07-16 00:06:36.397018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.421 [2024-07-16 00:06:36.397084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.422 [2024-07-16 00:06:36.397099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.422 [2024-07-16 00:06:36.397106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.422 [2024-07-16 00:06:36.397112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.422 [2024-07-16 00:06:36.397126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.422 qpair failed and we were unable to recover it. 
00:30:21.422 [2024-07-16 00:06:36.407178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.422 [2024-07-16 00:06:36.407241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.422 [2024-07-16 00:06:36.407260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.422 [2024-07-16 00:06:36.407268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.422 [2024-07-16 00:06:36.407274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.422 [2024-07-16 00:06:36.407288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.422 qpair failed and we were unable to recover it. 00:30:21.422 [2024-07-16 00:06:36.417216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.422 [2024-07-16 00:06:36.417325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.422 [2024-07-16 00:06:36.417341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.422 [2024-07-16 00:06:36.417348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.422 [2024-07-16 00:06:36.417355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.422 [2024-07-16 00:06:36.417369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.422 qpair failed and we were unable to recover it. 00:30:21.422 [2024-07-16 00:06:36.427222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.422 [2024-07-16 00:06:36.427285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.422 [2024-07-16 00:06:36.427303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.422 [2024-07-16 00:06:36.427310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.422 [2024-07-16 00:06:36.427316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.422 [2024-07-16 00:06:36.427331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.422 qpair failed and we were unable to recover it. 
00:30:21.422 [2024-07-16 00:06:36.437263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.422 [2024-07-16 00:06:36.437323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.422 [2024-07-16 00:06:36.437339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.422 [2024-07-16 00:06:36.437346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.422 [2024-07-16 00:06:36.437352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.422 [2024-07-16 00:06:36.437366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.422 qpair failed and we were unable to recover it. 00:30:21.422 [2024-07-16 00:06:36.447284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.422 [2024-07-16 00:06:36.447348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.422 [2024-07-16 00:06:36.447364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.422 [2024-07-16 00:06:36.447371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.422 [2024-07-16 00:06:36.447381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.422 [2024-07-16 00:06:36.447395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.422 qpair failed and we were unable to recover it. 00:30:21.422 [2024-07-16 00:06:36.457273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.422 [2024-07-16 00:06:36.457327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.422 [2024-07-16 00:06:36.457343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.422 [2024-07-16 00:06:36.457350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.422 [2024-07-16 00:06:36.457356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.422 [2024-07-16 00:06:36.457370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.422 qpair failed and we were unable to recover it. 
00:30:21.422 [2024-07-16 00:06:36.467320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.422 [2024-07-16 00:06:36.467382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.422 [2024-07-16 00:06:36.467397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.422 [2024-07-16 00:06:36.467404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.422 [2024-07-16 00:06:36.467411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.422 [2024-07-16 00:06:36.467425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.422 qpair failed and we were unable to recover it. 00:30:21.422 [2024-07-16 00:06:36.477304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.422 [2024-07-16 00:06:36.477363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.422 [2024-07-16 00:06:36.477378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.422 [2024-07-16 00:06:36.477385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.422 [2024-07-16 00:06:36.477391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.422 [2024-07-16 00:06:36.477405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.422 qpair failed and we were unable to recover it. 00:30:21.422 [2024-07-16 00:06:36.487372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.422 [2024-07-16 00:06:36.487435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.422 [2024-07-16 00:06:36.487451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.422 [2024-07-16 00:06:36.487458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.422 [2024-07-16 00:06:36.487464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.422 [2024-07-16 00:06:36.487478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.422 qpair failed and we were unable to recover it. 
00:30:21.422 [2024-07-16 00:06:36.497393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.422 [2024-07-16 00:06:36.497454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.422 [2024-07-16 00:06:36.497469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.422 [2024-07-16 00:06:36.497476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.422 [2024-07-16 00:06:36.497482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.422 [2024-07-16 00:06:36.497496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.422 qpair failed and we were unable to recover it. 00:30:21.422 [2024-07-16 00:06:36.507442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.422 [2024-07-16 00:06:36.507502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.422 [2024-07-16 00:06:36.507517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.422 [2024-07-16 00:06:36.507524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.423 [2024-07-16 00:06:36.507530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.423 [2024-07-16 00:06:36.507544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-07-16 00:06:36.517479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.423 [2024-07-16 00:06:36.517540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.423 [2024-07-16 00:06:36.517555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.423 [2024-07-16 00:06:36.517562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.423 [2024-07-16 00:06:36.517568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.423 [2024-07-16 00:06:36.517583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.423 qpair failed and we were unable to recover it. 
00:30:21.423 [2024-07-16 00:06:36.527512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.423 [2024-07-16 00:06:36.527577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.423 [2024-07-16 00:06:36.527592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.423 [2024-07-16 00:06:36.527599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.423 [2024-07-16 00:06:36.527605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.423 [2024-07-16 00:06:36.527620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-07-16 00:06:36.537521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.423 [2024-07-16 00:06:36.537579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.423 [2024-07-16 00:06:36.537594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.423 [2024-07-16 00:06:36.537601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.423 [2024-07-16 00:06:36.537611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.423 [2024-07-16 00:06:36.537625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-07-16 00:06:36.547547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.423 [2024-07-16 00:06:36.547651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.423 [2024-07-16 00:06:36.547667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.423 [2024-07-16 00:06:36.547679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.423 [2024-07-16 00:06:36.547686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.423 [2024-07-16 00:06:36.547702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.423 qpair failed and we were unable to recover it. 
00:30:21.423 [2024-07-16 00:06:36.557566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.423 [2024-07-16 00:06:36.557625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.423 [2024-07-16 00:06:36.557641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.423 [2024-07-16 00:06:36.557648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.423 [2024-07-16 00:06:36.557654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.423 [2024-07-16 00:06:36.557668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-07-16 00:06:36.567594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.423 [2024-07-16 00:06:36.567659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.423 [2024-07-16 00:06:36.567675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.423 [2024-07-16 00:06:36.567682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.423 [2024-07-16 00:06:36.567688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.423 [2024-07-16 00:06:36.567701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-07-16 00:06:36.577507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.423 [2024-07-16 00:06:36.577572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.423 [2024-07-16 00:06:36.577588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.423 [2024-07-16 00:06:36.577595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.423 [2024-07-16 00:06:36.577601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.423 [2024-07-16 00:06:36.577616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.423 qpair failed and we were unable to recover it. 
00:30:21.423 [2024-07-16 00:06:36.587645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.423 [2024-07-16 00:06:36.587705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.423 [2024-07-16 00:06:36.587721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.423 [2024-07-16 00:06:36.587728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.423 [2024-07-16 00:06:36.587734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.423 [2024-07-16 00:06:36.587748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-07-16 00:06:36.597674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.423 [2024-07-16 00:06:36.597741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.423 [2024-07-16 00:06:36.597756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.423 [2024-07-16 00:06:36.597763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.423 [2024-07-16 00:06:36.597770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.423 [2024-07-16 00:06:36.597785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-07-16 00:06:36.607711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.423 [2024-07-16 00:06:36.607770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.423 [2024-07-16 00:06:36.607784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.423 [2024-07-16 00:06:36.607792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.423 [2024-07-16 00:06:36.607798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.423 [2024-07-16 00:06:36.607811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.423 qpair failed and we were unable to recover it. 
00:30:21.685 [2024-07-16 00:06:36.617783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.685 [2024-07-16 00:06:36.617870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.685 [2024-07-16 00:06:36.617886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.685 [2024-07-16 00:06:36.617894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.685 [2024-07-16 00:06:36.617900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.685 [2024-07-16 00:06:36.617914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.685 qpair failed and we were unable to recover it. 00:30:21.685 [2024-07-16 00:06:36.627752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.685 [2024-07-16 00:06:36.627810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.685 [2024-07-16 00:06:36.627826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.685 [2024-07-16 00:06:36.627833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.685 [2024-07-16 00:06:36.627843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.685 [2024-07-16 00:06:36.627857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.685 qpair failed and we were unable to recover it. 00:30:21.685 [2024-07-16 00:06:36.637772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.685 [2024-07-16 00:06:36.637833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.685 [2024-07-16 00:06:36.637848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.685 [2024-07-16 00:06:36.637855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.685 [2024-07-16 00:06:36.637861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.685 [2024-07-16 00:06:36.637875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.685 qpair failed and we were unable to recover it. 
00:30:21.685 [2024-07-16 00:06:36.647806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.685 [2024-07-16 00:06:36.647867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.685 [2024-07-16 00:06:36.647882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.685 [2024-07-16 00:06:36.647889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.685 [2024-07-16 00:06:36.647895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.685 [2024-07-16 00:06:36.647909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.685 qpair failed and we were unable to recover it. 00:30:21.685 [2024-07-16 00:06:36.657825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.685 [2024-07-16 00:06:36.657887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.685 [2024-07-16 00:06:36.657902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.685 [2024-07-16 00:06:36.657910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.685 [2024-07-16 00:06:36.657917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.685 [2024-07-16 00:06:36.657932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.685 qpair failed and we were unable to recover it. 00:30:21.685 [2024-07-16 00:06:36.667892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.685 [2024-07-16 00:06:36.667958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.685 [2024-07-16 00:06:36.667983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.685 [2024-07-16 00:06:36.667992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.685 [2024-07-16 00:06:36.668000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.685 [2024-07-16 00:06:36.668019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.685 qpair failed and we were unable to recover it. 
00:30:21.685 [2024-07-16 00:06:36.677779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.685 [2024-07-16 00:06:36.677844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.685 [2024-07-16 00:06:36.677869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.685 [2024-07-16 00:06:36.677878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.685 [2024-07-16 00:06:36.677885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.685 [2024-07-16 00:06:36.677907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.685 qpair failed and we were unable to recover it. 00:30:21.685 [2024-07-16 00:06:36.687892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.685 [2024-07-16 00:06:36.687955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.685 [2024-07-16 00:06:36.687973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.685 [2024-07-16 00:06:36.687980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.685 [2024-07-16 00:06:36.687987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.685 [2024-07-16 00:06:36.688002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.685 qpair failed and we were unable to recover it. 00:30:21.685 [2024-07-16 00:06:36.697935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.685 [2024-07-16 00:06:36.698040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.685 [2024-07-16 00:06:36.698056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.685 [2024-07-16 00:06:36.698064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.685 [2024-07-16 00:06:36.698070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.685 [2024-07-16 00:06:36.698085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.685 qpair failed and we were unable to recover it. 
00:30:21.685 [2024-07-16 00:06:36.707956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.685 [2024-07-16 00:06:36.708011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.685 [2024-07-16 00:06:36.708027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.685 [2024-07-16 00:06:36.708034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.685 [2024-07-16 00:06:36.708041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.686 [2024-07-16 00:06:36.708056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.686 qpair failed and we were unable to recover it. 00:30:21.686 [2024-07-16 00:06:36.718006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.686 [2024-07-16 00:06:36.718070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.686 [2024-07-16 00:06:36.718086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.686 [2024-07-16 00:06:36.718098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.686 [2024-07-16 00:06:36.718104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.686 [2024-07-16 00:06:36.718120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.686 qpair failed and we were unable to recover it. 00:30:21.686 [2024-07-16 00:06:36.728013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.686 [2024-07-16 00:06:36.728076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.686 [2024-07-16 00:06:36.728091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.686 [2024-07-16 00:06:36.728098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.686 [2024-07-16 00:06:36.728104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.686 [2024-07-16 00:06:36.728118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.686 qpair failed and we were unable to recover it. 
00:30:21.686 [2024-07-16 00:06:36.738045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.686 [2024-07-16 00:06:36.738101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.686 [2024-07-16 00:06:36.738116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.686 [2024-07-16 00:06:36.738123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.686 [2024-07-16 00:06:36.738129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.686 [2024-07-16 00:06:36.738143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.686 qpair failed and we were unable to recover it. 00:30:21.686 [2024-07-16 00:06:36.748075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.686 [2024-07-16 00:06:36.748132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.686 [2024-07-16 00:06:36.748147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.686 [2024-07-16 00:06:36.748154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.686 [2024-07-16 00:06:36.748160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.686 [2024-07-16 00:06:36.748174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.686 qpair failed and we were unable to recover it. 00:30:21.686 [2024-07-16 00:06:36.758114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.686 [2024-07-16 00:06:36.758174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.686 [2024-07-16 00:06:36.758190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.686 [2024-07-16 00:06:36.758197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.686 [2024-07-16 00:06:36.758203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.686 [2024-07-16 00:06:36.758217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.686 qpair failed and we were unable to recover it. 
00:30:21.686 [2024-07-16 00:06:36.768119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.686 [2024-07-16 00:06:36.768181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.686 [2024-07-16 00:06:36.768196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.686 [2024-07-16 00:06:36.768203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.686 [2024-07-16 00:06:36.768210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.686 [2024-07-16 00:06:36.768224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.686 qpair failed and we were unable to recover it. 00:30:21.686 [2024-07-16 00:06:36.778149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.686 [2024-07-16 00:06:36.778209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.686 [2024-07-16 00:06:36.778224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.686 [2024-07-16 00:06:36.778235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.686 [2024-07-16 00:06:36.778241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.686 [2024-07-16 00:06:36.778256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.686 qpair failed and we were unable to recover it. 00:30:21.686 [2024-07-16 00:06:36.788056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.686 [2024-07-16 00:06:36.788117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.686 [2024-07-16 00:06:36.788132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.686 [2024-07-16 00:06:36.788139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.686 [2024-07-16 00:06:36.788146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.686 [2024-07-16 00:06:36.788159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.686 qpair failed and we were unable to recover it. 
00:30:21.686 [2024-07-16 00:06:36.798201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.686 [2024-07-16 00:06:36.798265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.686 [2024-07-16 00:06:36.798280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.686 [2024-07-16 00:06:36.798287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.686 [2024-07-16 00:06:36.798294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x17e1a50 00:30:21.686 [2024-07-16 00:06:36.798308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.686 qpair failed and we were unable to recover it. 00:30:21.686 [2024-07-16 00:06:36.808289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.686 [2024-07-16 00:06:36.808432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.686 [2024-07-16 00:06:36.808495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.686 [2024-07-16 00:06:36.808531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.686 [2024-07-16 00:06:36.808551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb8d0000b90 00:30:21.686 [2024-07-16 00:06:36.808606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:21.686 qpair failed and we were unable to recover it. 00:30:21.686 [2024-07-16 00:06:36.818309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:21.686 [2024-07-16 00:06:36.818439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:21.687 [2024-07-16 00:06:36.818477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:21.687 [2024-07-16 00:06:36.818495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:21.687 [2024-07-16 00:06:36.818512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb8d0000b90 00:30:21.687 [2024-07-16 00:06:36.818550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:21.687 qpair failed and we were unable to recover it. 00:30:21.687 [2024-07-16 00:06:36.818710] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:21.687 A controller has encountered a failure and is being reset. 00:30:21.687 [2024-07-16 00:06:36.818814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17df800 (9): Bad file descriptor 00:30:21.687 Controller properly reset. 
00:30:21.687 Initializing NVMe Controllers 00:30:21.687 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:21.687 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:21.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:21.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:21.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:21.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:21.687 Initialization complete. Launching workers. 00:30:21.687 Starting thread on core 1 00:30:21.687 Starting thread on core 2 00:30:21.687 Starting thread on core 3 00:30:21.687 Starting thread on core 0 00:30:21.687 00:06:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:21.687 00:30:21.687 real 0m11.384s 00:30:21.687 user 0m20.941s 00:30:21.687 sys 0m3.797s 00:30:21.687 00:06:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:30:21.687 00:06:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:21.687 ************************************ 00:30:21.687 END TEST nvmf_target_disconnect_tc2 00:30:21.687 ************************************ 00:30:21.947 00:06:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1136 -- # return 0 00:30:21.947 00:06:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:21.948 00:06:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:21.948 00:06:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:21.948 00:06:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:21.948 00:06:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:30:21.948 00:06:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:21.948 00:06:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:30:21.948 00:06:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:21.948 00:06:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:21.948 rmmod nvme_tcp 00:30:21.948 rmmod nvme_fabrics 00:30:21.948 rmmod nvme_keyring 00:30:21.948 00:06:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:21.948 00:06:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:30:21.948 00:06:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:30:21.948 00:06:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 665255 ']' 00:30:21.948 00:06:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 665255 00:30:21.948 00:06:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@942 -- # '[' -z 665255 ']' 00:30:21.948 00:06:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # kill -0 665255 00:30:21.948 00:06:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@947 -- # uname 00:30:21.948 00:06:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:30:21.948 00:06:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # ps 
--no-headers -o comm= 665255 00:30:21.948 00:06:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # process_name=reactor_4 00:30:21.948 00:06:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' reactor_4 = sudo ']' 00:30:21.948 00:06:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # echo 'killing process with pid 665255' 00:30:21.948 killing process with pid 665255 00:30:21.948 00:06:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@961 -- # kill 665255 00:30:21.948 00:06:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # wait 665255 00:30:22.208 00:06:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:22.208 00:06:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:22.208 00:06:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:22.208 00:06:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:22.208 00:06:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:22.208 00:06:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:22.208 00:06:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:22.208 00:06:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:24.121 00:06:39 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:24.121 00:30:24.121 real 0m22.454s 00:30:24.121 user 0m48.956s 00:30:24.121 sys 0m10.374s 00:30:24.121 00:06:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1118 -- # xtrace_disable 00:30:24.121 00:06:39 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:24.121 ************************************ 00:30:24.121 END TEST nvmf_target_disconnect 00:30:24.121 ************************************ 00:30:24.121 00:06:39 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:30:24.121 00:06:39 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:30:24.121 00:06:39 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:24.121 00:06:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:24.121 00:06:39 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:30:24.121 00:30:24.121 real 23m23.330s 00:30:24.121 user 47m24.305s 00:30:24.121 sys 7m40.985s 00:30:24.121 00:06:39 nvmf_tcp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:30:24.121 00:06:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:24.121 ************************************ 00:30:24.121 END TEST nvmf_tcp 00:30:24.121 ************************************ 00:30:24.383 00:06:39 -- common/autotest_common.sh@1136 -- # return 0 00:30:24.383 00:06:39 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:30:24.383 00:06:39 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:24.383 00:06:39 -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:30:24.383 00:06:39 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:30:24.383 00:06:39 -- common/autotest_common.sh@10 -- # set +x 00:30:24.383 ************************************ 00:30:24.383 START TEST spdkcli_nvmf_tcp 00:30:24.383 ************************************ 00:30:24.383 00:06:39 spdkcli_nvmf_tcp 
-- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:24.383 * Looking for test storage... 00:30:24.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:24.383 00:06:39 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=667078 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 667078 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@823 -- # '[' -z 667078 ']' 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@828 -- # local max_retries=100 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # xtrace_disable 00:30:24.384 00:06:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:24.645 [2024-07-16 00:06:39.578427] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:30:24.645 [2024-07-16 00:06:39.578499] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667078 ] 00:30:24.645 [2024-07-16 00:06:39.648996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:24.645 [2024-07-16 00:06:39.724208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.645 [2024-07-16 00:06:39.724212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.218 00:06:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:30:25.218 00:06:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # return 0 00:30:25.218 00:06:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:25.218 00:06:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:25.218 00:06:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:25.218 00:06:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:25.218 00:06:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:25.218 00:06:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:25.218 00:06:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:25.218 00:06:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:25.218 00:06:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:25.218 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:25.218 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:25.218 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:25.218 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:25.218 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:25.218 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:25.218 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:25.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:25.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:25.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:25.218 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:25.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:25.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create 
tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:25.218 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:25.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:25.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:25.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:25.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:25.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:25.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:25.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:25.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:25.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:25.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:25.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:25.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:25.218 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:25.218 ' 00:30:27.760 [2024-07-16 00:06:42.718089] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:28.701 [2024-07-16 00:06:43.881830] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:31.245 [2024-07-16 00:06:46.024088] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:33.152 [2024-07-16 00:06:47.861595] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:34.093 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:34.093 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:34.093 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:34.093 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:34.093 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:34.093 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:34.093 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:34.093 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:34.093 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:34.093 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:34.093 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses 
create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:34.093 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:34.093 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:34.093 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:34.093 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:34.093 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:34.093 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:34.093 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:34.093 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:34.093 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:34.093 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:34.093 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:34.093 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:34.093 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:34.093 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:34.093 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:34.093 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:34.093 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:34.353 00:06:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:34.353 00:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:34.353 00:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:34.353 00:06:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:34.353 00:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:34.353 00:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:34.353 00:06:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:34.353 00:06:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:34.613 00:06:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:34.873 00:06:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # 
rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:34.873 00:06:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:34.873 00:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:34.873 00:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:34.873 00:06:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:34.873 00:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:34.873 00:06:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:34.873 00:06:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:34.873 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:34.873 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:34.873 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:34.873 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:34.873 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:34.873 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:34.873 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:34.873 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:34.873 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:34.873 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:34.873 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:34.873 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:34.873 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:34.873 ' 00:30:40.221 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:40.221 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:40.221 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:40.221 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:40.221 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:40.221 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:40.221 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:40.221 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:40.221 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:40.221 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:40.221 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:40.221 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:40.221 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', 
False] 00:30:40.221 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 667078 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@942 -- # '[' -z 667078 ']' 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # kill -0 667078 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # uname 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 667078 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # echo 'killing process with pid 667078' 00:30:40.221 killing process with pid 667078 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@961 -- # kill 667078 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # wait 667078 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 667078 ']' 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 667078 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@942 -- # '[' -z 667078 ']' 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # kill -0 667078 00:30:40.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 946: kill: (667078) - No such process 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # echo 'Process with pid 667078 is not found' 00:30:40.221 Process with pid 667078 is not found 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:40.221 00:30:40.221 real 0m15.577s 00:30:40.221 user 0m32.089s 00:30:40.221 sys 0m0.709s 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:30:40.221 00:06:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:40.221 ************************************ 00:30:40.221 END TEST spdkcli_nvmf_tcp 00:30:40.221 ************************************ 00:30:40.221 00:06:55 -- common/autotest_common.sh@1136 -- # return 0 00:30:40.221 00:06:55 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:40.221 00:06:55 -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:30:40.221 00:06:55 -- common/autotest_common.sh@1099 -- # xtrace_disable 
00:30:40.221 00:06:55 -- common/autotest_common.sh@10 -- # set +x 00:30:40.221 ************************************ 00:30:40.221 START TEST nvmf_identify_passthru 00:30:40.221 ************************************ 00:30:40.221 00:06:55 nvmf_identify_passthru -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:40.221 * Looking for test storage... 00:30:40.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:40.221 00:06:55 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:40.221 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:40.221 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:40.221 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.221 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:40.221 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:40.221 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.221 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:40.221 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.221 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.221 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.221 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.221 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:40.221 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:40.221 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:40.221 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.221 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:40.221 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.221 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.221 00:06:55 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.221 00:06:55 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.221 00:06:55 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.221 00:06:55 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.221 00:06:55 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.222 00:06:55 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.222 00:06:55 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:40.222 00:06:55 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.222 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:30:40.222 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:40.222 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:40.222 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.222 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.222 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:40.222 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:40.222 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:40.222 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:40.222 00:06:55 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.222 00:06:55 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.222 00:06:55 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.222 00:06:55 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.222 00:06:55 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.222 00:06:55 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.222 00:06:55 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.222 00:06:55 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:40.222 00:06:55 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.222 00:06:55 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:40.222 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:40.222 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:40.222 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:40.222 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:40.222 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:40.222 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.222 00:06:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:40.222 00:06:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.222 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:40.222 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:40.222 00:06:55 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:30:40.222 00:06:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:48.364 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:48.364 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:48.364 00:07:03 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:48.364 Found net devices under 0000:31:00.0: cvl_0_0 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:48.364 Found net devices under 0000:31:00.1: cvl_0_1 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:48.364 00:07:03 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:48.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:48.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:30:48.364 00:30:48.364 --- 10.0.0.2 ping statistics --- 00:30:48.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.364 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:48.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:48.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:30:48.364 00:30:48.364 --- 10.0.0.1 ping statistics --- 00:30:48.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.364 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:48.364 00:07:03 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:48.364 00:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:48.364 00:07:03 nvmf_identify_passthru -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:48.364 00:07:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:48.364 00:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:48.364 00:07:03 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # bdfs=() 00:30:48.364 00:07:03 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # local bdfs 00:30:48.364 00:07:03 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:30:48.364 00:07:03 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:30:48.364 00:07:03 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:30:48.364 00:07:03 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:30:48.365 00:07:03 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:48.365 00:07:03 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:48.365 00:07:03 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # jq -r '.config[].params.traddr' 00:30:48.625 00:07:03 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # (( 1 == 0 )) 00:30:48.625 00:07:03 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # printf '%s\n' 0000:65:00.0 00:30:48.625 00:07:03 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # echo 0000:65:00.0 00:30:48.625 00:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:30:48.625 00:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:30:48.625 00:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:48.625 00:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:48.625 00:07:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:49.196 00:07:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605494 00:30:49.196 00:07:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:49.196 00:07:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:49.196 00:07:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:49.457 00:07:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:30:49.457 00:07:04 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:49.457 00:07:04 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:49.457 00:07:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:49.457 00:07:04 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:49.457 00:07:04 nvmf_identify_passthru -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:49.457 00:07:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:49.457 00:07:04 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=674505 00:30:49.457 00:07:04 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:49.457 00:07:04 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:49.457 00:07:04 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 674505 00:30:49.457 00:07:04 nvmf_identify_passthru -- common/autotest_common.sh@823 -- # '[' -z 674505 ']' 00:30:49.457 00:07:04 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:49.457 00:07:04 nvmf_identify_passthru -- common/autotest_common.sh@828 -- # local max_retries=100 00:30:49.457 00:07:04 nvmf_identify_passthru -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:49.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:49.457 00:07:04 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # xtrace_disable 00:30:49.457 00:07:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:49.717 [2024-07-16 00:07:04.695366] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:30:49.717 [2024-07-16 00:07:04.695421] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:49.717 [2024-07-16 00:07:04.771020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:49.717 [2024-07-16 00:07:04.841946] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:49.717 [2024-07-16 00:07:04.841986] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
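(Illustrative sketch, not part of the captured trace.) The identify_passthru steps above pick the first local NVMe controller and record its serial and model number so they can later be compared against what the NVMe-oF passthru subsystem reports over TCP. Assuming an SPDK checkout with paths relative to the repo root, the extraction pattern the script traces out is roughly:

    bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -1)
    nvme_serial_number=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
    nvme_model_number=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')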
00:30:49.717 [2024-07-16 00:07:04.841994] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:49.717 [2024-07-16 00:07:04.842000] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:49.717 [2024-07-16 00:07:04.842006] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:49.717 [2024-07-16 00:07:04.842146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.717 [2024-07-16 00:07:04.842268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:49.717 [2024-07-16 00:07:04.842423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.717 [2024-07-16 00:07:04.842424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:50.290 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:30:50.290 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # return 0 00:30:50.290 00:07:05 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:50.290 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:50.290 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:50.290 INFO: Log level set to 20 00:30:50.290 INFO: Requests: 00:30:50.290 { 00:30:50.290 "jsonrpc": "2.0", 00:30:50.290 "method": "nvmf_set_config", 00:30:50.290 "id": 1, 00:30:50.290 "params": { 00:30:50.290 "admin_cmd_passthru": { 00:30:50.290 "identify_ctrlr": true 00:30:50.290 } 00:30:50.290 } 00:30:50.290 } 00:30:50.290 00:30:50.290 INFO: response: 00:30:50.290 { 00:30:50.290 "jsonrpc": "2.0", 00:30:50.290 "id": 1, 00:30:50.290 "result": true 00:30:50.290 } 00:30:50.290 00:30:50.290 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:30:50.290 00:07:05 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:50.290 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:50.290 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:50.552 INFO: Setting log level to 20 00:30:50.552 INFO: Setting log level to 20 00:30:50.552 INFO: Log level set to 20 00:30:50.552 INFO: Log level set to 20 00:30:50.552 INFO: Requests: 00:30:50.552 { 00:30:50.552 "jsonrpc": "2.0", 00:30:50.552 "method": "framework_start_init", 00:30:50.552 "id": 1 00:30:50.552 } 00:30:50.552 00:30:50.552 INFO: Requests: 00:30:50.552 { 00:30:50.552 "jsonrpc": "2.0", 00:30:50.552 "method": "framework_start_init", 00:30:50.552 "id": 1 00:30:50.552 } 00:30:50.552 00:30:50.552 [2024-07-16 00:07:05.546658] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:50.552 INFO: response: 00:30:50.552 { 00:30:50.552 "jsonrpc": "2.0", 00:30:50.552 "id": 1, 00:30:50.552 "result": true 00:30:50.552 } 00:30:50.552 00:30:50.552 INFO: response: 00:30:50.552 { 00:30:50.552 "jsonrpc": "2.0", 00:30:50.552 "id": 1, 00:30:50.552 "result": true 00:30:50.552 } 00:30:50.552 00:30:50.552 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:30:50.552 00:07:05 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:50.552 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:50.552 00:07:05 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:50.552 INFO: Setting log level to 40 00:30:50.552 INFO: Setting log level to 40 00:30:50.552 INFO: Setting log level to 40 00:30:50.552 [2024-07-16 00:07:05.559989] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:50.552 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:30:50.552 00:07:05 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:50.552 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:50.552 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:50.552 00:07:05 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:30:50.552 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:50.552 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:50.814 Nvme0n1 00:30:50.814 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:30:50.814 00:07:05 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:50.814 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:50.814 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:50.814 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:30:50.814 00:07:05 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:50.814 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:50.814 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:50.814 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:30:50.814 00:07:05 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:50.814 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:50.814 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:50.814 [2024-07-16 00:07:05.949542] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:50.814 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:30:50.814 00:07:05 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:50.814 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:50.814 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:50.814 [ 00:30:50.814 { 00:30:50.814 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:50.814 "subtype": "Discovery", 00:30:50.814 "listen_addresses": [], 00:30:50.814 "allow_any_host": true, 00:30:50.814 "hosts": [] 00:30:50.814 }, 00:30:50.814 { 00:30:50.814 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:50.814 "subtype": "NVMe", 00:30:50.814 "listen_addresses": [ 00:30:50.814 { 00:30:50.814 "trtype": "TCP", 00:30:50.814 "adrfam": "IPv4", 00:30:50.814 "traddr": "10.0.0.2", 00:30:50.814 "trsvcid": "4420" 00:30:50.814 } 00:30:50.814 ], 00:30:50.814 "allow_any_host": true, 00:30:50.814 "hosts": [], 00:30:50.814 "serial_number": 
"SPDK00000000000001", 00:30:50.814 "model_number": "SPDK bdev Controller", 00:30:50.814 "max_namespaces": 1, 00:30:50.814 "min_cntlid": 1, 00:30:50.814 "max_cntlid": 65519, 00:30:50.814 "namespaces": [ 00:30:50.814 { 00:30:50.814 "nsid": 1, 00:30:50.814 "bdev_name": "Nvme0n1", 00:30:50.814 "name": "Nvme0n1", 00:30:50.814 "nguid": "3634473052605494002538450000002B", 00:30:50.814 "uuid": "36344730-5260-5494-0025-38450000002b" 00:30:50.814 } 00:30:50.814 ] 00:30:50.814 } 00:30:50.814 ] 00:30:50.814 00:07:05 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:30:50.814 00:07:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:50.814 00:07:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:50.814 00:07:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:51.075 00:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:30:51.075 00:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:51.075 00:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:51.075 00:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:51.075 00:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:30:51.075 00:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:30:51.075 00:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:30:51.075 00:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:51.075 00:07:06 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:51.075 00:07:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:51.075 00:07:06 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:30:51.075 00:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:51.075 00:07:06 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:51.075 00:07:06 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:51.075 00:07:06 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:51.075 00:07:06 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:51.075 00:07:06 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:51.075 00:07:06 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:51.075 00:07:06 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:51.075 rmmod nvme_tcp 00:30:51.075 rmmod nvme_fabrics 00:30:51.075 rmmod nvme_keyring 00:30:51.335 00:07:06 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:51.335 00:07:06 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:51.335 00:07:06 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:51.335 00:07:06 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 
674505 ']' 00:30:51.335 00:07:06 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 674505 00:30:51.335 00:07:06 nvmf_identify_passthru -- common/autotest_common.sh@942 -- # '[' -z 674505 ']' 00:30:51.335 00:07:06 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # kill -0 674505 00:30:51.336 00:07:06 nvmf_identify_passthru -- common/autotest_common.sh@947 -- # uname 00:30:51.336 00:07:06 nvmf_identify_passthru -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:30:51.336 00:07:06 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 674505 00:30:51.336 00:07:06 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:30:51.336 00:07:06 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:30:51.336 00:07:06 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # echo 'killing process with pid 674505' 00:30:51.336 killing process with pid 674505 00:30:51.336 00:07:06 nvmf_identify_passthru -- common/autotest_common.sh@961 -- # kill 674505 00:30:51.336 00:07:06 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # wait 674505 00:30:51.596 00:07:06 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:51.596 00:07:06 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:51.596 00:07:06 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:51.596 00:07:06 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:51.596 00:07:06 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:51.596 00:07:06 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:51.596 00:07:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:51.596 00:07:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:53.506 00:07:08 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:53.506 00:30:53.506 real 0m13.637s 00:30:53.506 user 0m9.739s 00:30:53.506 sys 0m6.852s 00:30:53.506 00:07:08 nvmf_identify_passthru -- common/autotest_common.sh@1118 -- # xtrace_disable 00:30:53.506 00:07:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:53.506 ************************************ 00:30:53.506 END TEST nvmf_identify_passthru 00:30:53.506 ************************************ 00:30:53.767 00:07:08 -- common/autotest_common.sh@1136 -- # return 0 00:30:53.767 00:07:08 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:53.767 00:07:08 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:30:53.767 00:07:08 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:30:53.767 00:07:08 -- common/autotest_common.sh@10 -- # set +x 00:30:53.767 ************************************ 00:30:53.767 START TEST nvmf_dif 00:30:53.767 ************************************ 00:30:53.767 00:07:08 nvmf_dif -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:53.767 * Looking for test storage... 
00:30:53.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:53.767 00:07:08 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:53.767 00:07:08 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:53.767 00:07:08 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:53.767 00:07:08 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:53.767 00:07:08 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.767 00:07:08 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.767 00:07:08 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.767 00:07:08 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:53.767 00:07:08 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:53.767 00:07:08 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:53.767 00:07:08 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:53.767 00:07:08 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:53.767 00:07:08 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:53.767 00:07:08 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:53.767 00:07:08 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:53.767 00:07:08 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:53.767 00:07:08 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:53.767 00:07:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:01.909 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:01.909 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:01.909 Found net devices under 0000:31:00.0: cvl_0_0 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:01.909 Found net devices under 0000:31:00.1: cvl_0_1 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:01.909 00:07:16 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:01.909 00:07:17 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:01.909 00:07:17 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:02.171 00:07:17 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:02.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:02.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:31:02.171 00:31:02.171 --- 10.0.0.2 ping statistics --- 00:31:02.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.171 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:31:02.171 00:07:17 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:02.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:02.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:31:02.171 00:31:02.171 --- 10.0.0.1 ping statistics --- 00:31:02.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:02.171 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:31:02.171 00:07:17 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:02.171 00:07:17 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:31:02.171 00:07:17 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:02.171 00:07:17 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:06.377 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:31:06.377 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:31:06.377 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:31:06.377 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:31:06.377 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:31:06.377 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:31:06.377 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:31:06.378 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:31:06.378 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:31:06.378 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:31:06.378 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:31:06.378 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:31:06.378 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:31:06.378 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:31:06.378 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:31:06.378 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:31:06.378 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:31:06.378 00:07:21 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:06.378 00:07:21 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:06.378 00:07:21 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:06.378 00:07:21 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:06.378 00:07:21 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:06.378 00:07:21 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:06.378 00:07:21 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:06.378 00:07:21 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:06.378 00:07:21 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:06.378 00:07:21 nvmf_dif -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:06.378 00:07:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:06.378 00:07:21 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=681153 00:31:06.378 00:07:21 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 681153 00:31:06.378 00:07:21 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:06.378 00:07:21 nvmf_dif -- common/autotest_common.sh@823 -- # '[' -z 681153 ']' 00:31:06.378 00:07:21 nvmf_dif -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:06.378 00:07:21 nvmf_dif -- common/autotest_common.sh@828 -- # local max_retries=100 00:31:06.378 00:07:21 nvmf_dif -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:06.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:06.378 00:07:21 nvmf_dif -- common/autotest_common.sh@832 -- # xtrace_disable 00:31:06.378 00:07:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:06.378 [2024-07-16 00:07:21.116885] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:31:06.378 [2024-07-16 00:07:21.116940] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:06.378 [2024-07-16 00:07:21.195415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.378 [2024-07-16 00:07:21.268464] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:06.378 [2024-07-16 00:07:21.268503] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:06.378 [2024-07-16 00:07:21.268513] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:06.378 [2024-07-16 00:07:21.268520] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:06.378 [2024-07-16 00:07:21.268526] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
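(Illustrative sketch, not part of the captured trace.) Around this point the dif test starts its own NVMe-oF target inside the cvl_0_0_ns_spdk namespace and, because NVMF_TRANSPORT_OPTS gained ' --dif-insert-or-strip' above, the TCP transport it creates will insert and strip DIF metadata on behalf of the host. Assuming an SPDK checkout and the default /var/tmp/spdk.sock RPC socket, the equivalent manual sequence is roughly:

    # launch the target inside the test namespace (what nvmfappstart does here)
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    # once the RPC socket is up, create the TCP transport with DIF insert/strip
    scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip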
00:31:06.378 [2024-07-16 00:07:21.268543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.949 00:07:21 nvmf_dif -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:31:06.949 00:07:21 nvmf_dif -- common/autotest_common.sh@856 -- # return 0 00:31:06.949 00:07:21 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:06.949 00:07:21 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:06.949 00:07:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:06.949 00:07:21 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:06.949 00:07:21 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:06.949 00:07:21 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:06.949 00:07:21 nvmf_dif -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:06.949 00:07:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:06.949 [2024-07-16 00:07:21.923723] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:06.949 00:07:21 nvmf_dif -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:06.949 00:07:21 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:06.949 00:07:21 nvmf_dif -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:31:06.949 00:07:21 nvmf_dif -- common/autotest_common.sh@1099 -- # xtrace_disable 00:31:06.949 00:07:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:06.949 ************************************ 00:31:06.949 START TEST fio_dif_1_default 00:31:06.949 ************************************ 00:31:06.949 00:07:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1117 -- # fio_dif_1 00:31:06.949 00:07:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:06.949 00:07:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:06.949 00:07:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:31:06.949 00:07:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:31:06.949 00:07:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:06.949 00:07:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:06.949 00:07:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:06.949 00:07:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:06.949 bdev_null0 00:31:06.949 00:07:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:06.949 00:07:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:06.949 00:07:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:06.949 00:07:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:06.949 00:07:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:06.949 00:07:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:06.949 00:07:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:06.949 00:07:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:06.949 00:07:22 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:06.949 00:07:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:06.949 00:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:06.949 00:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:06.949 [2024-07-16 00:07:22.008054] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:06.949 00:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:06.949 00:07:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:06.949 00:07:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:06.949 00:07:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:06.949 00:07:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:31:06.949 00:07:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:06.949 00:07:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:06.950 { 00:31:06.950 "params": { 00:31:06.950 "name": "Nvme$subsystem", 00:31:06.950 "trtype": "$TEST_TRANSPORT", 00:31:06.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:06.950 "adrfam": "ipv4", 00:31:06.950 "trsvcid": "$NVMF_PORT", 00:31:06.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:06.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:06.950 "hdgst": ${hdgst:-false}, 00:31:06.950 "ddgst": ${ddgst:-false} 00:31:06.950 }, 00:31:06.950 "method": "bdev_nvme_attach_controller" 00:31:06.950 } 00:31:06.950 EOF 00:31:06.950 )") 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local sanitizers 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # shift 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local asan_lib= 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # grep libasan 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:06.950 "params": { 00:31:06.950 "name": "Nvme0", 00:31:06.950 "trtype": "tcp", 00:31:06.950 "traddr": "10.0.0.2", 00:31:06.950 "adrfam": "ipv4", 00:31:06.950 "trsvcid": "4420", 00:31:06.950 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:06.950 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:06.950 "hdgst": false, 00:31:06.950 "ddgst": false 00:31:06.950 }, 00:31:06.950 "method": "bdev_nvme_attach_controller" 00:31:06.950 }' 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # asan_lib= 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # asan_lib= 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:06.950 00:07:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:07.548 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:07.548 fio-3.35 00:31:07.548 Starting 1 thread 00:31:19.783 00:31:19.783 filename0: (groupid=0, jobs=1): err= 0: pid=681697: Tue Jul 16 00:07:33 2024 00:31:19.783 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10020msec) 00:31:19.783 slat (nsec): min=5400, max=32627, avg=6135.55, stdev=1360.84 00:31:19.783 clat (usec): min=734, max=45161, avg=21576.39, stdev=20415.20 00:31:19.783 lat (usec): min=742, max=45193, avg=21582.52, stdev=20415.20 00:31:19.783 clat percentiles (usec): 00:31:19.783 | 1.00th=[ 906], 5.00th=[ 930], 10.00th=[ 1029], 20.00th=[ 1074], 00:31:19.784 | 30.00th=[ 1090], 40.00th=[ 1139], 50.00th=[41157], 60.00th=[41681], 00:31:19.784 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:31:19.784 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:31:19.784 | 99.99th=[45351] 00:31:19.784 bw ( KiB/s): min= 672, max= 768, per=99.88%, avg=740.80, stdev=33.28, samples=20 00:31:19.784 iops : min= 168, max= 192, avg=185.20, stdev= 8.32, samples=20 00:31:19.784 lat (usec) : 
750=0.11%, 1000=8.73% 00:31:19.784 lat (msec) : 2=40.95%, 50=50.22% 00:31:19.784 cpu : usr=95.43%, sys=4.36%, ctx=13, majf=0, minf=227 00:31:19.784 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:19.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.784 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.784 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:19.784 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:19.784 00:31:19.784 Run status group 0 (all jobs): 00:31:19.784 READ: bw=741KiB/s (759kB/s), 741KiB/s-741KiB/s (759kB/s-759kB/s), io=7424KiB (7602kB), run=10020-10020msec 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:19.784 00:31:19.784 real 0m11.292s 00:31:19.784 user 0m28.302s 00:31:19.784 sys 0m0.763s 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1118 -- # xtrace_disable 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:19.784 ************************************ 00:31:19.784 END TEST fio_dif_1_default 00:31:19.784 ************************************ 00:31:19.784 00:07:33 nvmf_dif -- common/autotest_common.sh@1136 -- # return 0 00:31:19.784 00:07:33 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:19.784 00:07:33 nvmf_dif -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:31:19.784 00:07:33 nvmf_dif -- common/autotest_common.sh@1099 -- # xtrace_disable 00:31:19.784 00:07:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:19.784 ************************************ 00:31:19.784 START TEST fio_dif_1_multi_subsystems 00:31:19.784 ************************************ 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1117 -- # fio_dif_1_multi_subsystems 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 
00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:19.784 bdev_null0 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:19.784 [2024-07-16 00:07:33.381802] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:19.784 bdev_null1 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:19.784 00:07:33 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:19.784 { 00:31:19.784 "params": { 00:31:19.784 "name": "Nvme$subsystem", 00:31:19.784 "trtype": "$TEST_TRANSPORT", 00:31:19.784 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:19.784 "adrfam": "ipv4", 00:31:19.784 "trsvcid": "$NVMF_PORT", 00:31:19.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:19.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:19.784 "hdgst": ${hdgst:-false}, 00:31:19.784 "ddgst": ${ddgst:-false} 00:31:19.784 }, 00:31:19.784 "method": "bdev_nvme_attach_controller" 00:31:19.784 } 00:31:19.784 EOF 00:31:19.784 )") 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:19.784 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local sanitizers 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1334 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # shift 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local asan_lib= 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # grep libasan 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:19.785 { 00:31:19.785 "params": { 00:31:19.785 "name": "Nvme$subsystem", 00:31:19.785 "trtype": "$TEST_TRANSPORT", 00:31:19.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:19.785 "adrfam": "ipv4", 00:31:19.785 "trsvcid": "$NVMF_PORT", 00:31:19.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:19.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:19.785 "hdgst": ${hdgst:-false}, 00:31:19.785 "ddgst": ${ddgst:-false} 00:31:19.785 }, 00:31:19.785 "method": "bdev_nvme_attach_controller" 00:31:19.785 } 00:31:19.785 EOF 00:31:19.785 )") 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:19.785 "params": { 00:31:19.785 "name": "Nvme0", 00:31:19.785 "trtype": "tcp", 00:31:19.785 "traddr": "10.0.0.2", 00:31:19.785 "adrfam": "ipv4", 00:31:19.785 "trsvcid": "4420", 00:31:19.785 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:19.785 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:19.785 "hdgst": false, 00:31:19.785 "ddgst": false 00:31:19.785 }, 00:31:19.785 "method": "bdev_nvme_attach_controller" 00:31:19.785 },{ 00:31:19.785 "params": { 00:31:19.785 "name": "Nvme1", 00:31:19.785 "trtype": "tcp", 00:31:19.785 "traddr": "10.0.0.2", 00:31:19.785 "adrfam": "ipv4", 00:31:19.785 "trsvcid": "4420", 00:31:19.785 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:19.785 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:19.785 "hdgst": false, 00:31:19.785 "ddgst": false 00:31:19.785 }, 00:31:19.785 "method": "bdev_nvme_attach_controller" 00:31:19.785 }' 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # asan_lib= 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # asan_lib= 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:19.785 00:07:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:19.785 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:19.785 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:19.785 fio-3.35 00:31:19.785 Starting 2 threads 00:31:29.789 00:31:29.789 filename0: (groupid=0, jobs=1): err= 0: pid=684197: Tue Jul 16 00:07:44 2024 00:31:29.789 read: IOPS=185, BW=742KiB/s (759kB/s)(7424KiB/10012msec) 00:31:29.789 slat (nsec): min=5401, max=37296, avg=6498.91, stdev=2009.29 00:31:29.789 clat (usec): min=781, max=42663, avg=21558.98, stdev=20365.23 00:31:29.789 lat (usec): min=786, max=42689, avg=21565.48, stdev=20365.08 00:31:29.789 clat percentiles (usec): 00:31:29.789 | 1.00th=[ 824], 5.00th=[ 963], 10.00th=[ 1057], 20.00th=[ 1090], 00:31:29.789 | 30.00th=[ 1123], 40.00th=[ 1205], 50.00th=[41157], 60.00th=[41681], 00:31:29.789 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:31:29.789 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:31:29.789 | 99.99th=[42730] 00:31:29.789 bw ( KiB/s): min= 672, max= 768, per=66.00%, avg=740.80, 
stdev=34.86, samples=20 00:31:29.789 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:31:29.789 lat (usec) : 1000=5.87% 00:31:29.789 lat (msec) : 2=43.91%, 50=50.22% 00:31:29.789 cpu : usr=96.78%, sys=3.00%, ctx=14, majf=0, minf=78 00:31:29.789 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:29.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.789 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.789 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:29.789 filename1: (groupid=0, jobs=1): err= 0: pid=684198: Tue Jul 16 00:07:44 2024 00:31:29.789 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10032msec) 00:31:29.789 slat (nsec): min=5396, max=37750, avg=6668.30, stdev=2668.20 00:31:29.789 clat (usec): min=40976, max=42735, avg=41954.63, stdev=170.72 00:31:29.789 lat (usec): min=40981, max=42762, avg=41961.29, stdev=170.87 00:31:29.789 clat percentiles (usec): 00:31:29.789 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:31:29.790 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:29.790 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:29.790 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:31:29.790 | 99.99th=[42730] 00:31:29.790 bw ( KiB/s): min= 352, max= 384, per=33.89%, avg=380.80, stdev= 9.85, samples=20 00:31:29.790 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:31:29.790 lat (msec) : 50=100.00% 00:31:29.790 cpu : usr=95.99%, sys=3.79%, ctx=20, majf=0, minf=170 00:31:29.790 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:29.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.790 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.790 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:29.790 00:31:29.790 Run status group 0 (all jobs): 00:31:29.790 READ: bw=1121KiB/s (1148kB/s), 381KiB/s-742KiB/s (390kB/s-759kB/s), io=11.0MiB (11.5MB), run=10012-10032msec 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:29.790 00:31:29.790 real 0m11.535s 00:31:29.790 user 0m32.403s 00:31:29.790 sys 0m1.048s 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1118 -- # xtrace_disable 00:31:29.790 00:07:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:29.790 ************************************ 00:31:29.790 END TEST fio_dif_1_multi_subsystems 00:31:29.790 ************************************ 00:31:29.790 00:07:44 nvmf_dif -- common/autotest_common.sh@1136 -- # return 0 00:31:29.790 00:07:44 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:29.790 00:07:44 nvmf_dif -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:31:29.790 00:07:44 nvmf_dif -- common/autotest_common.sh@1099 -- # xtrace_disable 00:31:29.790 00:07:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:29.790 ************************************ 00:31:29.790 START TEST fio_dif_rand_params 00:31:29.790 ************************************ 00:31:29.790 00:07:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1117 -- # fio_dif_rand_params 00:31:29.790 00:07:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:29.790 00:07:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:29.790 00:07:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:29.790 00:07:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:29.790 00:07:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:29.790 00:07:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:29.790 00:07:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:29.790 00:07:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:29.790 00:07:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:29.790 00:07:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:29.790 00:07:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:29.790 00:07:44 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:29.790 00:07:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:29.790 00:07:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:29.790 00:07:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:29.790 bdev_null0 00:31:29.790 00:07:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:29.790 00:07:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:29.790 00:07:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:29.790 00:07:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:29.790 00:07:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:29.790 00:07:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:29.790 00:07:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:29.790 00:07:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.051 00:07:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:30.051 00:07:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:30.051 00:07:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:30.051 00:07:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.051 [2024-07-16 00:07:44.995567] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.051 00:07:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:30.051 00:07:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:30.051 { 00:31:30.051 "params": { 00:31:30.051 "name": "Nvme$subsystem", 00:31:30.051 "trtype": "$TEST_TRANSPORT", 00:31:30.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:30.051 "adrfam": "ipv4", 00:31:30.051 "trsvcid": "$NVMF_PORT", 00:31:30.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:30.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:30.051 "hdgst": ${hdgst:-false}, 00:31:30.051 "ddgst": 
${ddgst:-false} 00:31:30.051 }, 00:31:30.051 "method": "bdev_nvme_attach_controller" 00:31:30.051 } 00:31:30.051 EOF 00:31:30.051 )") 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local sanitizers 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # shift 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local asan_lib= 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # grep libasan 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:30.051 00:07:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:30.051 "params": { 00:31:30.051 "name": "Nvme0", 00:31:30.051 "trtype": "tcp", 00:31:30.051 "traddr": "10.0.0.2", 00:31:30.051 "adrfam": "ipv4", 00:31:30.051 "trsvcid": "4420", 00:31:30.051 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:30.051 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:30.052 "hdgst": false, 00:31:30.052 "ddgst": false 00:31:30.052 }, 00:31:30.052 "method": "bdev_nvme_attach_controller" 00:31:30.052 }' 00:31:30.052 00:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # asan_lib= 00:31:30.052 00:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:31:30.052 00:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.052 00:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.052 00:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:31:30.052 00:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:31:30.052 00:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # asan_lib= 00:31:30.052 00:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:31:30.052 00:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:30.052 00:07:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.311 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:30.311 ... 
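The LD_PRELOAD/fio pair in the trace above is the whole I/O path of these tests: fio is launched with SPDK's external bdev ioengine preloaded, the JSON emitted by gen_nvmf_target_json is handed over on one anonymous fd (--spdk_json_conf /dev/fd/62) and the generated job file on another (/dev/fd/61). Run outside the harness, the equivalent invocation is roughly the sketch below; the spdk_bdev engine and --spdk_json_conf option are exactly the ones shown above, while bdev.json, dif.fio and $SPDK_DIR are placeholder names for the config, job file and checkout path that the harness passes via file descriptors:

    # preload SPDK's fio plugin so the spdk_bdev ioengine is available, then
    # point it at the bdev_nvme_attach_controller config and the fio job file
    LD_PRELOAD=$SPDK_DIR/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio

The ldd | grep libasan / libclang_rt.asan probes just before the launch only decide whether an ASan runtime must be preloaded ahead of the plugin; both come back empty here, so LD_PRELOAD ends up containing the plugin alone.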
00:31:30.311 fio-3.35 00:31:30.311 Starting 3 threads 00:31:36.888 00:31:36.888 filename0: (groupid=0, jobs=1): err= 0: pid=686411: Tue Jul 16 00:07:50 2024 00:31:36.888 read: IOPS=196, BW=24.6MiB/s (25.8MB/s)(124MiB/5048msec) 00:31:36.888 slat (nsec): min=5459, max=33215, avg=8936.11, stdev=1871.15 00:31:36.888 clat (usec): min=5635, max=57333, avg=15206.91, stdev=11874.03 00:31:36.888 lat (usec): min=5644, max=57343, avg=15215.85, stdev=11874.15 00:31:36.888 clat percentiles (usec): 00:31:36.888 | 1.00th=[ 6259], 5.00th=[ 7242], 10.00th=[ 8160], 20.00th=[ 9241], 00:31:36.888 | 30.00th=[10159], 40.00th=[11076], 50.00th=[11994], 60.00th=[12780], 00:31:36.888 | 70.00th=[13829], 80.00th=[15008], 90.00th=[16909], 95.00th=[51643], 00:31:36.888 | 99.00th=[54264], 99.50th=[54789], 99.90th=[57410], 99.95th=[57410], 00:31:36.888 | 99.99th=[57410] 00:31:36.888 bw ( KiB/s): min=17152, max=38400, per=30.31%, avg=25344.00, stdev=5850.16, samples=10 00:31:36.888 iops : min= 134, max= 300, avg=198.00, stdev=45.70, samples=10 00:31:36.888 lat (msec) : 10=28.83%, 20=62.20%, 50=0.91%, 100=8.06% 00:31:36.888 cpu : usr=95.48%, sys=4.26%, ctx=7, majf=0, minf=103 00:31:36.888 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.888 issued rwts: total=992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.888 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:36.888 filename0: (groupid=0, jobs=1): err= 0: pid=686412: Tue Jul 16 00:07:50 2024 00:31:36.888 read: IOPS=217, BW=27.2MiB/s (28.5MB/s)(137MiB/5046msec) 00:31:36.888 slat (nsec): min=5524, max=33804, avg=8095.65, stdev=1760.16 00:31:36.888 clat (usec): min=5413, max=91889, avg=13736.92, stdev=10596.08 00:31:36.888 lat (usec): min=5422, max=91898, avg=13745.02, stdev=10596.12 00:31:36.888 clat percentiles (usec): 00:31:36.888 | 1.00th=[ 6128], 5.00th=[ 6652], 10.00th=[ 7701], 20.00th=[ 8848], 00:31:36.888 | 30.00th=[ 9765], 40.00th=[10552], 50.00th=[11338], 60.00th=[11994], 00:31:36.888 | 70.00th=[12780], 80.00th=[13698], 90.00th=[15533], 95.00th=[49546], 00:31:36.888 | 99.00th=[54789], 99.50th=[55313], 99.90th=[56886], 99.95th=[91751], 00:31:36.888 | 99.99th=[91751] 00:31:36.888 bw ( KiB/s): min=20480, max=33536, per=33.55%, avg=28057.60, stdev=5084.61, samples=10 00:31:36.888 iops : min= 160, max= 262, avg=219.20, stdev=39.72, samples=10 00:31:36.888 lat (msec) : 10=32.97%, 20=60.38%, 50=2.19%, 100=4.46% 00:31:36.888 cpu : usr=95.98%, sys=3.77%, ctx=13, majf=0, minf=103 00:31:36.888 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.888 issued rwts: total=1098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.888 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:36.888 filename0: (groupid=0, jobs=1): err= 0: pid=686413: Tue Jul 16 00:07:50 2024 00:31:36.888 read: IOPS=241, BW=30.2MiB/s (31.6MB/s)(151MiB/5006msec) 00:31:36.888 slat (nsec): min=5439, max=31966, avg=7913.36, stdev=1623.28 00:31:36.888 clat (usec): min=4871, max=93378, avg=12418.84, stdev=11852.79 00:31:36.888 lat (usec): min=4879, max=93385, avg=12426.75, stdev=11852.84 00:31:36.889 clat percentiles (usec): 00:31:36.889 | 1.00th=[ 5080], 5.00th=[ 5932], 10.00th=[ 6652], 
20.00th=[ 7308], 00:31:36.889 | 30.00th=[ 7963], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9634], 00:31:36.889 | 70.00th=[10290], 80.00th=[11076], 90.00th=[13304], 95.00th=[49546], 00:31:36.889 | 99.00th=[52167], 99.50th=[53216], 99.90th=[91751], 99.95th=[93848], 00:31:36.889 | 99.99th=[93848] 00:31:36.889 bw ( KiB/s): min=21248, max=37376, per=36.89%, avg=30848.00, stdev=4650.86, samples=10 00:31:36.889 iops : min= 166, max= 292, avg=241.00, stdev=36.33, samples=10 00:31:36.889 lat (msec) : 10=65.89%, 20=25.83%, 50=3.97%, 100=4.30% 00:31:36.889 cpu : usr=96.56%, sys=3.18%, ctx=8, majf=0, minf=76 00:31:36.889 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.889 issued rwts: total=1208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.889 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:36.889 00:31:36.889 Run status group 0 (all jobs): 00:31:36.889 READ: bw=81.7MiB/s (85.6MB/s), 24.6MiB/s-30.2MiB/s (25.8MB/s-31.6MB/s), io=412MiB (432MB), run=5006-5048msec 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:36.889 bdev_null0 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:36.889 [2024-07-16 00:07:51.180659] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:36.889 bdev_null1 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:36.889 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:36.890 bdev_null2 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:36.890 00:07:51 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:36.890 { 00:31:36.890 "params": { 00:31:36.890 "name": "Nvme$subsystem", 00:31:36.890 "trtype": "$TEST_TRANSPORT", 00:31:36.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.890 "adrfam": "ipv4", 00:31:36.890 "trsvcid": "$NVMF_PORT", 00:31:36.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.890 "hdgst": ${hdgst:-false}, 00:31:36.890 "ddgst": ${ddgst:-false} 00:31:36.890 }, 00:31:36.890 "method": "bdev_nvme_attach_controller" 00:31:36.890 } 00:31:36.890 EOF 00:31:36.890 )") 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local sanitizers 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # shift 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local asan_lib= 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # grep libasan 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:36.890 { 00:31:36.890 "params": { 00:31:36.890 "name": "Nvme$subsystem", 00:31:36.890 "trtype": "$TEST_TRANSPORT", 00:31:36.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.890 "adrfam": "ipv4", 00:31:36.890 "trsvcid": "$NVMF_PORT", 00:31:36.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.890 "hdgst": ${hdgst:-false}, 00:31:36.890 "ddgst": ${ddgst:-false} 00:31:36.890 }, 00:31:36.890 "method": "bdev_nvme_attach_controller" 00:31:36.890 } 00:31:36.890 EOF 00:31:36.890 )") 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:36.890 00:07:51 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:36.890 00:07:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:36.890 { 00:31:36.890 "params": { 00:31:36.890 "name": "Nvme$subsystem", 00:31:36.890 "trtype": "$TEST_TRANSPORT", 00:31:36.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:36.890 "adrfam": "ipv4", 00:31:36.890 "trsvcid": "$NVMF_PORT", 00:31:36.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.890 "hdgst": ${hdgst:-false}, 00:31:36.891 "ddgst": ${ddgst:-false} 00:31:36.891 }, 00:31:36.891 "method": "bdev_nvme_attach_controller" 00:31:36.891 } 00:31:36.891 EOF 00:31:36.891 )") 00:31:36.891 00:07:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:36.891 00:07:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:36.891 00:07:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:36.891 00:07:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:36.891 "params": { 00:31:36.891 "name": "Nvme0", 00:31:36.891 "trtype": "tcp", 00:31:36.891 "traddr": "10.0.0.2", 00:31:36.891 "adrfam": "ipv4", 00:31:36.891 "trsvcid": "4420", 00:31:36.891 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:36.891 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:36.891 "hdgst": false, 00:31:36.891 "ddgst": false 00:31:36.891 }, 00:31:36.891 "method": "bdev_nvme_attach_controller" 00:31:36.891 },{ 00:31:36.891 "params": { 00:31:36.891 "name": "Nvme1", 00:31:36.891 "trtype": "tcp", 00:31:36.891 "traddr": "10.0.0.2", 00:31:36.891 "adrfam": "ipv4", 00:31:36.891 "trsvcid": "4420", 00:31:36.891 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:36.891 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:36.891 "hdgst": false, 00:31:36.891 "ddgst": false 00:31:36.891 }, 00:31:36.891 "method": "bdev_nvme_attach_controller" 00:31:36.891 },{ 00:31:36.891 "params": { 00:31:36.891 "name": "Nvme2", 00:31:36.891 "trtype": "tcp", 00:31:36.891 "traddr": "10.0.0.2", 00:31:36.891 "adrfam": "ipv4", 00:31:36.891 "trsvcid": "4420", 00:31:36.891 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:36.891 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:36.891 "hdgst": false, 00:31:36.891 "ddgst": false 00:31:36.891 }, 00:31:36.891 "method": "bdev_nvme_attach_controller" 00:31:36.891 }' 00:31:36.891 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # asan_lib= 00:31:36.891 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:31:36.891 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:31:36.891 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:36.891 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:31:36.891 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:31:36.891 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # asan_lib= 00:31:36.891 
00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:31:36.891 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:36.891 00:07:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:36.891 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:36.891 ... 00:31:36.891 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:36.891 ... 00:31:36.891 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:36.891 ... 00:31:36.891 fio-3.35 00:31:36.891 Starting 24 threads 00:31:49.112 00:31:49.112 filename0: (groupid=0, jobs=1): err= 0: pid=687903: Tue Jul 16 00:08:02 2024 00:31:49.112 read: IOPS=554, BW=2219KiB/s (2272kB/s)(21.7MiB/10008msec) 00:31:49.112 slat (nsec): min=5493, max=89242, avg=11529.21, stdev=9827.90 00:31:49.112 clat (usec): min=4048, max=34105, avg=28747.16, stdev=5601.14 00:31:49.112 lat (usec): min=4072, max=34115, avg=28758.69, stdev=5602.99 00:31:49.112 clat percentiles (usec): 00:31:49.112 | 1.00th=[ 6128], 5.00th=[19792], 10.00th=[20841], 20.00th=[22938], 00:31:49.112 | 30.00th=[24773], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:49.112 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:31:49.112 | 99.00th=[33817], 99.50th=[33817], 99.90th=[33817], 99.95th=[34341], 00:31:49.112 | 99.99th=[34341] 00:31:49.112 bw ( KiB/s): min= 1920, max= 2816, per=4.64%, avg=2214.40, stdev=266.24, samples=20 00:31:49.112 iops : min= 480, max= 704, avg=553.60, stdev=66.56, samples=20 00:31:49.112 lat (msec) : 10=1.44%, 20=5.21%, 50=93.35% 00:31:49.112 cpu : usr=99.10%, sys=0.59%, ctx=14, majf=0, minf=9 00:31:49.112 IO depths : 1=6.1%, 2=12.3%, 4=24.6%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:49.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.112 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.112 issued rwts: total=5552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.112 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:49.112 filename0: (groupid=0, jobs=1): err= 0: pid=687904: Tue Jul 16 00:08:02 2024 00:31:49.112 read: IOPS=483, BW=1935KiB/s (1982kB/s)(18.9MiB/10003msec) 00:31:49.112 slat (nsec): min=5422, max=99805, avg=18117.18, stdev=13610.08 00:31:49.112 clat (usec): min=6488, max=56858, avg=32956.89, stdev=4225.56 00:31:49.112 lat (usec): min=6496, max=56864, avg=32975.01, stdev=4225.08 00:31:49.112 clat percentiles (usec): 00:31:49.112 | 1.00th=[20579], 5.00th=[30016], 10.00th=[31851], 20.00th=[31851], 00:31:49.112 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:49.112 | 70.00th=[32900], 80.00th=[33424], 90.00th=[35390], 95.00th=[42206], 00:31:49.112 | 99.00th=[50070], 99.50th=[54789], 99.90th=[56886], 99.95th=[56886], 00:31:49.112 | 99.99th=[56886] 00:31:49.112 bw ( KiB/s): min= 1696, max= 2048, per=4.04%, avg=1926.05, stdev=88.87, samples=19 00:31:49.112 iops : min= 424, max= 512, avg=481.47, stdev=22.28, samples=19 00:31:49.112 lat (msec) : 10=0.12%, 20=0.74%, 50=97.91%, 100=1.22% 00:31:49.112 cpu : usr=98.34%, sys=1.03%, ctx=30, majf=0, minf=9 00:31:49.112 IO depths : 1=1.3%, 2=2.8%, 
4=7.9%, 8=73.5%, 16=14.5%, 32=0.0%, >=64=0.0% 00:31:49.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.112 complete : 0=0.0%, 4=90.6%, 8=6.8%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.112 issued rwts: total=4840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.112 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:49.112 filename0: (groupid=0, jobs=1): err= 0: pid=687905: Tue Jul 16 00:08:02 2024 00:31:49.112 read: IOPS=492, BW=1969KiB/s (2016kB/s)(19.2MiB/10013msec) 00:31:49.112 slat (nsec): min=5694, max=99268, avg=15987.06, stdev=11720.52 00:31:49.112 clat (usec): min=17625, max=53384, avg=32362.11, stdev=1703.62 00:31:49.112 lat (usec): min=17631, max=53406, avg=32378.09, stdev=1703.24 00:31:49.112 clat percentiles (usec): 00:31:49.112 | 1.00th=[27919], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:31:49.112 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:49.112 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:49.112 | 99.00th=[40633], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:31:49.112 | 99.99th=[53216] 00:31:49.112 bw ( KiB/s): min= 1920, max= 2048, per=4.12%, avg=1964.95, stdev=62.53, samples=20 00:31:49.112 iops : min= 480, max= 512, avg=491.20, stdev=15.66, samples=20 00:31:49.112 lat (msec) : 20=0.12%, 50=99.84%, 100=0.04% 00:31:49.112 cpu : usr=98.98%, sys=0.66%, ctx=60, majf=0, minf=9 00:31:49.112 IO depths : 1=5.0%, 2=11.1%, 4=24.9%, 8=51.5%, 16=7.5%, 32=0.0%, >=64=0.0% 00:31:49.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.112 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.112 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.112 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:49.112 filename0: (groupid=0, jobs=1): err= 0: pid=687906: Tue Jul 16 00:08:02 2024 00:31:49.112 read: IOPS=490, BW=1961KiB/s (2008kB/s)(19.2MiB/10005msec) 00:31:49.112 slat (nsec): min=5472, max=90353, avg=16952.35, stdev=14181.48 00:31:49.112 clat (usec): min=10300, max=58510, avg=32564.74, stdev=2584.22 00:31:49.112 lat (usec): min=10308, max=58525, avg=32581.69, stdev=2583.37 00:31:49.112 clat percentiles (usec): 00:31:49.112 | 1.00th=[28181], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:31:49.112 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:49.112 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:31:49.112 | 99.00th=[41681], 99.50th=[51119], 99.90th=[58459], 99.95th=[58459], 00:31:49.112 | 99.99th=[58459] 00:31:49.112 bw ( KiB/s): min= 1792, max= 2000, per=4.11%, avg=1960.00, stdev=53.32, samples=20 00:31:49.112 iops : min= 448, max= 500, avg=490.00, stdev=13.33, samples=20 00:31:49.112 lat (msec) : 20=0.45%, 50=98.86%, 100=0.69% 00:31:49.112 cpu : usr=99.19%, sys=0.52%, ctx=12, majf=0, minf=9 00:31:49.112 IO depths : 1=0.1%, 2=0.2%, 4=1.2%, 8=80.6%, 16=17.9%, 32=0.0%, >=64=0.0% 00:31:49.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.112 complete : 0=0.0%, 4=89.6%, 8=9.7%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.112 issued rwts: total=4906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.112 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:49.112 filename0: (groupid=0, jobs=1): err= 0: pid=687907: Tue Jul 16 00:08:02 2024 00:31:49.112 read: IOPS=492, BW=1970KiB/s (2017kB/s)(19.2MiB/10008msec) 00:31:49.112 slat (nsec): min=5607, max=85519, 
avg=16249.04, stdev=11768.56 00:31:49.112 clat (usec): min=11455, max=52203, avg=32341.40, stdev=1076.78 00:31:49.112 lat (usec): min=11462, max=52219, avg=32357.64, stdev=1077.13 00:31:49.112 clat percentiles (usec): 00:31:49.112 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:31:49.113 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:49.113 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:49.113 | 99.00th=[34341], 99.50th=[34866], 99.90th=[37487], 99.95th=[37487], 00:31:49.113 | 99.99th=[52167] 00:31:49.113 bw ( KiB/s): min= 1920, max= 2048, per=4.12%, avg=1964.95, stdev=62.53, samples=20 00:31:49.113 iops : min= 480, max= 512, avg=491.20, stdev=15.66, samples=20 00:31:49.113 lat (msec) : 20=0.04%, 50=99.92%, 100=0.04% 00:31:49.113 cpu : usr=98.03%, sys=1.14%, ctx=114, majf=0, minf=9 00:31:49.113 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:49.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.113 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.113 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.113 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:49.113 filename0: (groupid=0, jobs=1): err= 0: pid=687908: Tue Jul 16 00:08:02 2024 00:31:49.113 read: IOPS=490, BW=1962KiB/s (2009kB/s)(19.2MiB/10015msec) 00:31:49.113 slat (nsec): min=5573, max=83838, avg=16725.83, stdev=12764.59 00:31:49.113 clat (usec): min=10791, max=67392, avg=32532.72, stdev=3441.76 00:31:49.113 lat (usec): min=10802, max=67412, avg=32549.45, stdev=3441.50 00:31:49.113 clat percentiles (usec): 00:31:49.113 | 1.00th=[14353], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:31:49.113 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:49.113 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:31:49.113 | 99.00th=[50070], 99.50th=[51643], 99.90th=[53740], 99.95th=[67634], 00:31:49.113 | 99.99th=[67634] 00:31:49.113 bw ( KiB/s): min= 1840, max= 2024, per=4.10%, avg=1958.40, stdev=51.43, samples=20 00:31:49.113 iops : min= 460, max= 506, avg=489.60, stdev=12.86, samples=20 00:31:49.113 lat (msec) : 20=1.30%, 50=97.78%, 100=0.92% 00:31:49.113 cpu : usr=99.24%, sys=0.47%, ctx=11, majf=0, minf=9 00:31:49.113 IO depths : 1=0.1%, 2=0.2%, 4=3.0%, 8=80.0%, 16=16.7%, 32=0.0%, >=64=0.0% 00:31:49.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.113 complete : 0=0.0%, 4=89.6%, 8=8.7%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.113 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.113 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:49.113 filename0: (groupid=0, jobs=1): err= 0: pid=687909: Tue Jul 16 00:08:02 2024 00:31:49.113 read: IOPS=500, BW=2002KiB/s (2050kB/s)(19.6MiB/10004msec) 00:31:49.113 slat (nsec): min=5583, max=95620, avg=15722.80, stdev=11632.46 00:31:49.113 clat (usec): min=4007, max=34647, avg=31816.54, stdev=3540.93 00:31:49.113 lat (usec): min=4027, max=34655, avg=31832.26, stdev=3540.66 00:31:49.113 clat percentiles (usec): 00:31:49.113 | 1.00th=[ 6456], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:31:49.113 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:49.113 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:49.113 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:31:49.113 | 99.99th=[34866] 
00:31:49.113 bw ( KiB/s): min= 1920, max= 2560, per=4.19%, avg=2000.84, stdev=149.09, samples=19 00:31:49.113 iops : min= 480, max= 640, avg=500.21, stdev=37.27, samples=19 00:31:49.113 lat (msec) : 10=1.28%, 20=0.64%, 50=98.08% 00:31:49.113 cpu : usr=98.19%, sys=1.12%, ctx=512, majf=0, minf=9 00:31:49.113 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:49.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.113 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.113 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.113 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:49.113 filename0: (groupid=0, jobs=1): err= 0: pid=687910: Tue Jul 16 00:08:02 2024 00:31:49.113 read: IOPS=503, BW=2016KiB/s (2064kB/s)(19.7MiB/10028msec) 00:31:49.113 slat (nsec): min=5580, max=94856, avg=12299.57, stdev=10879.31 00:31:49.113 clat (usec): min=16271, max=54682, avg=31642.22, stdev=4686.52 00:31:49.113 lat (usec): min=16280, max=54702, avg=31654.52, stdev=4686.65 00:31:49.113 clat percentiles (usec): 00:31:49.113 | 1.00th=[20317], 5.00th=[21890], 10.00th=[24511], 20.00th=[31589], 00:31:49.113 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:49.113 | 70.00th=[32375], 80.00th=[33162], 90.00th=[33817], 95.00th=[38011], 00:31:49.113 | 99.00th=[47973], 99.50th=[53216], 99.90th=[54789], 99.95th=[54789], 00:31:49.113 | 99.99th=[54789] 00:31:49.113 bw ( KiB/s): min= 1792, max= 2336, per=4.22%, avg=2015.20, stdev=122.57, samples=20 00:31:49.113 iops : min= 448, max= 584, avg=503.80, stdev=30.64, samples=20 00:31:49.113 lat (msec) : 20=0.87%, 50=98.26%, 100=0.87% 00:31:49.113 cpu : usr=99.09%, sys=0.61%, ctx=12, majf=0, minf=9 00:31:49.113 IO depths : 1=3.5%, 2=7.0%, 4=16.6%, 8=63.3%, 16=9.7%, 32=0.0%, >=64=0.0% 00:31:49.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.113 complete : 0=0.0%, 4=92.0%, 8=2.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.113 issued rwts: total=5054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.113 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:49.113 filename1: (groupid=0, jobs=1): err= 0: pid=687911: Tue Jul 16 00:08:02 2024 00:31:49.113 read: IOPS=526, BW=2105KiB/s (2155kB/s)(20.6MiB/10004msec) 00:31:49.113 slat (nsec): min=5588, max=82758, avg=13640.27, stdev=11813.49 00:31:49.113 clat (usec): min=3993, max=34429, avg=30289.24, stdev=4804.20 00:31:49.113 lat (usec): min=4019, max=34436, avg=30302.88, stdev=4805.54 00:31:49.113 clat percentiles (usec): 00:31:49.113 | 1.00th=[ 6652], 5.00th=[20317], 10.00th=[22414], 20.00th=[31589], 00:31:49.113 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:49.113 | 70.00th=[32113], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:31:49.113 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:31:49.113 | 99.99th=[34341] 00:31:49.113 bw ( KiB/s): min= 1920, max= 2560, per=4.38%, avg=2088.42, stdev=170.95, samples=19 00:31:49.113 iops : min= 480, max= 640, avg=522.11, stdev=42.74, samples=19 00:31:49.113 lat (msec) : 4=0.02%, 10=1.20%, 20=1.67%, 50=97.11% 00:31:49.113 cpu : usr=98.69%, sys=0.96%, ctx=43, majf=0, minf=9 00:31:49.113 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:49.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.113 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.113 issued 
rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.113 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:49.113 filename1: (groupid=0, jobs=1): err= 0: pid=687912: Tue Jul 16 00:08:02 2024 00:31:49.113 read: IOPS=497, BW=1992KiB/s (2039kB/s)(19.5MiB/10046msec) 00:31:49.113 slat (nsec): min=5325, max=91358, avg=14744.87, stdev=11470.38 00:31:49.113 clat (usec): min=14014, max=58288, avg=31901.18, stdev=3684.50 00:31:49.113 lat (usec): min=14020, max=58320, avg=31915.92, stdev=3685.44 00:31:49.113 clat percentiles (usec): 00:31:49.113 | 1.00th=[20055], 5.00th=[23725], 10.00th=[31065], 20.00th=[31851], 00:31:49.113 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:49.113 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:49.113 | 99.00th=[46400], 99.50th=[51643], 99.90th=[53740], 99.95th=[58459], 00:31:49.113 | 99.99th=[58459] 00:31:49.113 bw ( KiB/s): min= 1792, max= 2336, per=4.19%, avg=2000.00, stdev=113.02, samples=20 00:31:49.113 iops : min= 448, max= 584, avg=500.00, stdev=28.25, samples=20 00:31:49.113 lat (msec) : 20=1.20%, 50=98.00%, 100=0.80% 00:31:49.113 cpu : usr=99.18%, sys=0.53%, ctx=12, majf=0, minf=11 00:31:49.113 IO depths : 1=2.1%, 2=7.8%, 4=23.3%, 8=56.4%, 16=10.5%, 32=0.0%, >=64=0.0% 00:31:49.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.113 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.113 issued rwts: total=5002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.113 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:49.113 filename1: (groupid=0, jobs=1): err= 0: pid=687913: Tue Jul 16 00:08:02 2024 00:31:49.113 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.2MiB/10003msec) 00:31:49.113 slat (nsec): min=5580, max=88274, avg=18046.78, stdev=13402.62 00:31:49.113 clat (usec): min=6517, max=55121, avg=32304.77, stdev=2043.86 00:31:49.113 lat (usec): min=6526, max=55138, avg=32322.81, stdev=2043.28 00:31:49.113 clat percentiles (usec): 00:31:49.113 | 1.00th=[30802], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:31:49.113 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:49.113 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:49.113 | 99.00th=[34341], 99.50th=[34341], 99.90th=[55313], 99.95th=[55313], 00:31:49.113 | 99.99th=[55313] 00:31:49.113 bw ( KiB/s): min= 1795, max= 2048, per=4.11%, avg=1960.58, stdev=74.17, samples=19 00:31:49.113 iops : min= 448, max= 512, avg=490.11, stdev=18.64, samples=19 00:31:49.113 lat (msec) : 10=0.04%, 20=0.37%, 50=99.27%, 100=0.32% 00:31:49.113 cpu : usr=99.00%, sys=0.70%, ctx=12, majf=0, minf=9 00:31:49.113 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:49.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.114 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.114 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.114 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:49.114 filename1: (groupid=0, jobs=1): err= 0: pid=687914: Tue Jul 16 00:08:02 2024 00:31:49.114 read: IOPS=516, BW=2064KiB/s (2114kB/s)(20.2MiB/10010msec) 00:31:49.114 slat (nsec): min=5579, max=77526, avg=13896.92, stdev=10939.19 00:31:49.114 clat (usec): min=14789, max=53028, avg=30887.46, stdev=3895.96 00:31:49.114 lat (usec): min=14798, max=53049, avg=30901.36, stdev=3898.45 00:31:49.114 clat percentiles (usec): 00:31:49.114 | 
1.00th=[19530], 5.00th=[21627], 10.00th=[23200], 20.00th=[31589], 00:31:49.114 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:49.114 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33162], 95.00th=[33424], 00:31:49.114 | 99.00th=[34866], 99.50th=[38536], 99.90th=[52691], 99.95th=[53216], 00:31:49.114 | 99.99th=[53216] 00:31:49.114 bw ( KiB/s): min= 1840, max= 2864, per=4.32%, avg=2060.00, stdev=258.30, samples=20 00:31:49.114 iops : min= 460, max= 716, avg=515.00, stdev=64.57, samples=20 00:31:49.114 lat (msec) : 20=2.23%, 50=97.58%, 100=0.19% 00:31:49.114 cpu : usr=99.03%, sys=0.69%, ctx=10, majf=0, minf=9 00:31:49.114 IO depths : 1=5.2%, 2=10.5%, 4=22.2%, 8=54.8%, 16=7.3%, 32=0.0%, >=64=0.0% 00:31:49.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.114 complete : 0=0.0%, 4=93.3%, 8=0.9%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.114 issued rwts: total=5166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.114 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:49.114 filename1: (groupid=0, jobs=1): err= 0: pid=687915: Tue Jul 16 00:08:02 2024 00:31:49.114 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.2MiB/10003msec) 00:31:49.114 slat (nsec): min=5621, max=75583, avg=16322.18, stdev=11906.00 00:31:49.114 clat (usec): min=11739, max=55794, avg=32331.58, stdev=2002.20 00:31:49.114 lat (usec): min=11744, max=55811, avg=32347.90, stdev=2001.25 00:31:49.114 clat percentiles (usec): 00:31:49.114 | 1.00th=[30802], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:31:49.114 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:49.114 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:49.114 | 99.00th=[34341], 99.50th=[34341], 99.90th=[55837], 99.95th=[55837], 00:31:49.114 | 99.99th=[55837] 00:31:49.114 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1960.42, stdev=74.55, samples=19 00:31:49.114 iops : min= 448, max= 512, avg=490.11, stdev=18.64, samples=19 00:31:49.114 lat (msec) : 20=0.32%, 50=99.35%, 100=0.32% 00:31:49.114 cpu : usr=99.29%, sys=0.43%, ctx=14, majf=0, minf=9 00:31:49.114 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:49.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.114 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.114 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.114 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:49.114 filename1: (groupid=0, jobs=1): err= 0: pid=687916: Tue Jul 16 00:08:02 2024 00:31:49.114 read: IOPS=496, BW=1988KiB/s (2036kB/s)(19.4MiB/10013msec) 00:31:49.114 slat (nsec): min=5649, max=84888, avg=14347.41, stdev=11123.60 00:31:49.114 clat (usec): min=11379, max=44607, avg=32081.86, stdev=2331.10 00:31:49.114 lat (usec): min=11402, max=44622, avg=32096.21, stdev=2331.36 00:31:49.114 clat percentiles (usec): 00:31:49.114 | 1.00th=[20841], 5.00th=[31327], 10.00th=[31851], 20.00th=[31851], 00:31:49.114 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:49.114 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:49.114 | 99.00th=[34866], 99.50th=[40633], 99.90th=[43779], 99.95th=[44303], 00:31:49.114 | 99.99th=[44827] 00:31:49.114 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=1984.00, stdev=64.21, samples=20 00:31:49.114 iops : min= 480, max= 512, avg=496.00, stdev=16.05, samples=20 00:31:49.114 lat (msec) : 20=0.40%, 50=99.60% 00:31:49.114 cpu 
: usr=99.05%, sys=0.58%, ctx=67, majf=0, minf=9 00:31:49.114 IO depths : 1=5.4%, 2=11.6%, 4=24.9%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:31:49.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.114 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.114 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.114 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:49.114 filename1: (groupid=0, jobs=1): err= 0: pid=687917: Tue Jul 16 00:08:02 2024 00:31:49.114 read: IOPS=492, BW=1970KiB/s (2018kB/s)(19.2MiB/10004msec) 00:31:49.114 slat (nsec): min=5615, max=87847, avg=17921.04, stdev=14681.58 00:31:49.114 clat (usec): min=21941, max=39658, avg=32311.28, stdev=986.25 00:31:49.114 lat (usec): min=21961, max=39684, avg=32329.20, stdev=984.87 00:31:49.114 clat percentiles (usec): 00:31:49.114 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:31:49.114 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:49.114 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:49.114 | 99.00th=[34341], 99.50th=[34341], 99.90th=[36439], 99.95th=[36439], 00:31:49.114 | 99.99th=[39584] 00:31:49.114 bw ( KiB/s): min= 1920, max= 2048, per=4.12%, avg=1967.16, stdev=63.44, samples=19 00:31:49.114 iops : min= 480, max= 512, avg=491.79, stdev=15.86, samples=19 00:31:49.114 lat (msec) : 50=100.00% 00:31:49.114 cpu : usr=99.08%, sys=0.61%, ctx=45, majf=0, minf=9 00:31:49.114 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:49.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.114 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.114 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.114 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:49.114 filename1: (groupid=0, jobs=1): err= 0: pid=687918: Tue Jul 16 00:08:02 2024 00:31:49.114 read: IOPS=492, BW=1968KiB/s (2016kB/s)(19.2MiB/10014msec) 00:31:49.114 slat (nsec): min=5720, max=97739, avg=13599.40, stdev=10872.27 00:31:49.114 clat (usec): min=28359, max=46329, avg=32388.71, stdev=1082.33 00:31:49.114 lat (usec): min=28365, max=46353, avg=32402.31, stdev=1080.92 00:31:49.114 clat percentiles (usec): 00:31:49.114 | 1.00th=[30802], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:31:49.114 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:49.114 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:49.114 | 99.00th=[34341], 99.50th=[35914], 99.90th=[46400], 99.95th=[46400], 00:31:49.114 | 99.99th=[46400] 00:31:49.114 bw ( KiB/s): min= 1920, max= 2048, per=4.12%, avg=1964.95, stdev=62.53, samples=20 00:31:49.114 iops : min= 480, max= 512, avg=491.20, stdev=15.66, samples=20 00:31:49.114 lat (msec) : 50=100.00% 00:31:49.114 cpu : usr=99.26%, sys=0.42%, ctx=54, majf=0, minf=9 00:31:49.114 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:49.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.114 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.114 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.114 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:49.114 filename2: (groupid=0, jobs=1): err= 0: pid=687919: Tue Jul 16 00:08:02 2024 00:31:49.114 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.2MiB/10003msec) 
00:31:49.114 slat (nsec): min=5597, max=97529, avg=15597.56, stdev=11219.25 00:31:49.114 clat (usec): min=13830, max=52174, avg=32320.65, stdev=1702.58 00:31:49.114 lat (usec): min=13838, max=52190, avg=32336.24, stdev=1702.08 00:31:49.114 clat percentiles (usec): 00:31:49.114 | 1.00th=[30802], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:31:49.114 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:49.114 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:49.114 | 99.00th=[34341], 99.50th=[36439], 99.90th=[52167], 99.95th=[52167], 00:31:49.114 | 99.99th=[52167] 00:31:49.114 bw ( KiB/s): min= 1795, max= 2048, per=4.12%, avg=1967.32, stdev=76.07, samples=19 00:31:49.114 iops : min= 448, max= 512, avg=491.79, stdev=19.11, samples=19 00:31:49.114 lat (msec) : 20=0.32%, 50=99.35%, 100=0.32% 00:31:49.114 cpu : usr=99.18%, sys=0.51%, ctx=59, majf=0, minf=9 00:31:49.114 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:49.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.114 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.114 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.114 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:49.114 filename2: (groupid=0, jobs=1): err= 0: pid=687920: Tue Jul 16 00:08:02 2024 00:31:49.114 read: IOPS=491, BW=1968KiB/s (2015kB/s)(19.2MiB/10018msec) 00:31:49.114 slat (nsec): min=5582, max=80594, avg=15596.86, stdev=13083.44 00:31:49.114 clat (usec): min=21077, max=67910, avg=32393.22, stdev=1728.06 00:31:49.114 lat (usec): min=21086, max=67929, avg=32408.82, stdev=1727.22 00:31:49.114 clat percentiles (usec): 00:31:49.114 | 1.00th=[27919], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:31:49.114 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:49.114 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:31:49.114 | 99.00th=[34866], 99.50th=[39584], 99.90th=[51119], 99.95th=[67634], 00:31:49.114 | 99.99th=[67634] 00:31:49.114 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1964.80, stdev=75.15, samples=20 00:31:49.114 iops : min= 448, max= 512, avg=491.20, stdev=18.79, samples=20 00:31:49.114 lat (msec) : 50=99.68%, 100=0.32% 00:31:49.115 cpu : usr=98.97%, sys=0.71%, ctx=59, majf=0, minf=9 00:31:49.115 IO depths : 1=6.1%, 2=12.1%, 4=24.8%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:49.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.115 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.115 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.115 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:49.115 filename2: (groupid=0, jobs=1): err= 0: pid=687921: Tue Jul 16 00:08:02 2024 00:31:49.115 read: IOPS=492, BW=1970KiB/s (2017kB/s)(19.2MiB/10006msec) 00:31:49.115 slat (nsec): min=5580, max=73650, avg=16570.69, stdev=11853.33 00:31:49.115 clat (usec): min=18284, max=40347, avg=32314.07, stdev=1169.71 00:31:49.115 lat (usec): min=18290, max=40368, avg=32330.64, stdev=1170.21 00:31:49.115 clat percentiles (usec): 00:31:49.115 | 1.00th=[31065], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:31:49.115 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:49.115 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:49.115 | 99.00th=[34341], 99.50th=[34866], 99.90th=[40109], 
99.95th=[40109], 00:31:49.115 | 99.99th=[40109] 00:31:49.115 bw ( KiB/s): min= 1920, max= 2048, per=4.12%, avg=1967.16, stdev=63.44, samples=19 00:31:49.115 iops : min= 480, max= 512, avg=491.79, stdev=15.86, samples=19 00:31:49.115 lat (msec) : 20=0.32%, 50=99.68% 00:31:49.115 cpu : usr=98.02%, sys=1.11%, ctx=114, majf=0, minf=9 00:31:49.115 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:49.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.115 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.115 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.115 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:49.115 filename2: (groupid=0, jobs=1): err= 0: pid=687922: Tue Jul 16 00:08:02 2024 00:31:49.115 read: IOPS=492, BW=1969KiB/s (2016kB/s)(19.2MiB/10013msec) 00:31:49.115 slat (nsec): min=5591, max=84082, avg=12674.64, stdev=11281.95 00:31:49.115 clat (usec): min=17015, max=49865, avg=32398.57, stdev=2129.94 00:31:49.115 lat (usec): min=17020, max=49887, avg=32411.25, stdev=2129.84 00:31:49.115 clat percentiles (usec): 00:31:49.115 | 1.00th=[22414], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:31:49.115 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:49.115 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:49.115 | 99.00th=[42206], 99.50th=[46924], 99.90th=[50070], 99.95th=[50070], 00:31:49.115 | 99.99th=[50070] 00:31:49.115 bw ( KiB/s): min= 1795, max= 2048, per=4.12%, avg=1964.95, stdev=74.79, samples=20 00:31:49.115 iops : min= 448, max= 512, avg=491.20, stdev=18.79, samples=20 00:31:49.115 lat (msec) : 20=0.69%, 50=99.31% 00:31:49.115 cpu : usr=98.82%, sys=0.87%, ctx=60, majf=0, minf=9 00:31:49.115 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:49.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.115 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.115 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.115 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:49.115 filename2: (groupid=0, jobs=1): err= 0: pid=687923: Tue Jul 16 00:08:02 2024 00:31:49.115 read: IOPS=495, BW=1984KiB/s (2031kB/s)(19.4MiB/10002msec) 00:31:49.115 slat (nsec): min=5563, max=85195, avg=19230.96, stdev=14515.14 00:31:49.115 clat (usec): min=11426, max=55152, avg=32108.51, stdev=3521.02 00:31:49.115 lat (usec): min=11432, max=55169, avg=32127.75, stdev=3521.34 00:31:49.115 clat percentiles (usec): 00:31:49.115 | 1.00th=[20579], 5.00th=[26608], 10.00th=[31327], 20.00th=[31851], 00:31:49.115 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:49.115 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[34341], 00:31:49.115 | 99.00th=[46924], 99.50th=[51643], 99.90th=[55313], 99.95th=[55313], 00:31:49.115 | 99.99th=[55313] 00:31:49.115 bw ( KiB/s): min= 1795, max= 2080, per=4.13%, avg=1971.53, stdev=75.38, samples=19 00:31:49.115 iops : min= 448, max= 520, avg=492.84, stdev=18.94, samples=19 00:31:49.115 lat (msec) : 20=0.58%, 50=98.89%, 100=0.52% 00:31:49.115 cpu : usr=97.66%, sys=1.28%, ctx=68, majf=0, minf=9 00:31:49.115 IO depths : 1=0.9%, 2=5.8%, 4=21.6%, 8=59.8%, 16=11.9%, 32=0.0%, >=64=0.0% 00:31:49.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.115 complete : 0=0.0%, 4=93.6%, 8=1.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:31:49.115 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.115 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:49.115 filename2: (groupid=0, jobs=1): err= 0: pid=687924: Tue Jul 16 00:08:02 2024 00:31:49.115 read: IOPS=495, BW=1982KiB/s (2029kB/s)(19.4MiB/10011msec) 00:31:49.115 slat (nsec): min=5581, max=97211, avg=15434.50, stdev=12098.78 00:31:49.115 clat (usec): min=13275, max=34744, avg=32165.59, stdev=1862.55 00:31:49.115 lat (usec): min=13283, max=34750, avg=32181.02, stdev=1862.59 00:31:49.115 clat percentiles (usec): 00:31:49.115 | 1.00th=[21890], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:31:49.115 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:49.115 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:49.115 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:31:49.115 | 99.99th=[34866] 00:31:49.115 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1977.60, stdev=65.33, samples=20 00:31:49.115 iops : min= 480, max= 512, avg=494.40, stdev=16.33, samples=20 00:31:49.115 lat (msec) : 20=0.77%, 50=99.23% 00:31:49.115 cpu : usr=99.24%, sys=0.48%, ctx=8, majf=0, minf=9 00:31:49.115 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:49.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.115 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.115 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.115 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:49.115 filename2: (groupid=0, jobs=1): err= 0: pid=687925: Tue Jul 16 00:08:02 2024 00:31:49.115 read: IOPS=498, BW=1994KiB/s (2042kB/s)(19.5MiB/10013msec) 00:31:49.115 slat (nsec): min=5587, max=96736, avg=17540.19, stdev=13862.28 00:31:49.115 clat (usec): min=11817, max=34489, avg=31946.67, stdev=2241.80 00:31:49.115 lat (usec): min=11825, max=34496, avg=31964.21, stdev=2241.74 00:31:49.115 clat percentiles (usec): 00:31:49.115 | 1.00th=[20317], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:49.115 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:49.115 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:49.115 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:31:49.115 | 99.99th=[34341] 00:31:49.115 bw ( KiB/s): min= 1920, max= 2176, per=4.17%, avg=1990.40, stdev=77.42, samples=20 00:31:49.115 iops : min= 480, max= 544, avg=497.60, stdev=19.35, samples=20 00:31:49.115 lat (msec) : 20=0.76%, 50=99.24% 00:31:49.115 cpu : usr=97.09%, sys=1.52%, ctx=119, majf=0, minf=9 00:31:49.115 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:49.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.115 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.115 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.115 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:49.115 filename2: (groupid=0, jobs=1): err= 0: pid=687926: Tue Jul 16 00:08:02 2024 00:31:49.115 read: IOPS=492, BW=1970KiB/s (2018kB/s)(19.2MiB/10005msec) 00:31:49.115 slat (nsec): min=5596, max=93773, avg=17842.57, stdev=12694.11 00:31:49.115 clat (usec): min=11440, max=61338, avg=32316.28, stdev=1982.76 00:31:49.115 lat (usec): min=11457, max=61353, avg=32334.12, stdev=1982.20 00:31:49.115 clat percentiles (usec): 00:31:49.115 | 1.00th=[28181], 
5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:31:49.115 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:49.115 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:49.115 | 99.00th=[34341], 99.50th=[38011], 99.90th=[52167], 99.95th=[61080], 00:31:49.115 | 99.99th=[61080] 00:31:49.115 bw ( KiB/s): min= 1795, max= 2048, per=4.12%, avg=1964.95, stdev=74.79, samples=20 00:31:49.115 iops : min= 448, max= 512, avg=491.20, stdev=18.79, samples=20 00:31:49.115 lat (msec) : 20=0.32%, 50=99.35%, 100=0.32% 00:31:49.115 cpu : usr=99.12%, sys=0.59%, ctx=10, majf=0, minf=9 00:31:49.115 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:49.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.115 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.115 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.115 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:49.115 00:31:49.115 Run status group 0 (all jobs): 00:31:49.116 READ: bw=46.6MiB/s (48.8MB/s), 1935KiB/s-2219KiB/s (1982kB/s-2272kB/s), io=468MiB (491MB), run=10002-10046msec 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:49.116 00:08:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.116 bdev_null0 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.116 00:08:03 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.116 [2024-07-16 00:08:03.043661] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.116 bdev_null1 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:49.116 { 00:31:49.116 "params": { 00:31:49.116 "name": "Nvme$subsystem", 00:31:49.116 "trtype": "$TEST_TRANSPORT", 00:31:49.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:49.116 "adrfam": "ipv4", 00:31:49.116 "trsvcid": "$NVMF_PORT", 00:31:49.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:49.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:49.116 "hdgst": ${hdgst:-false}, 00:31:49.116 "ddgst": ${ddgst:-false} 00:31:49.116 }, 00:31:49.116 "method": "bdev_nvme_attach_controller" 00:31:49.116 } 00:31:49.116 EOF 00:31:49.116 )") 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:49.116 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local sanitizers 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # shift 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local asan_lib= 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # grep libasan 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:49.117 { 00:31:49.117 "params": { 00:31:49.117 "name": "Nvme$subsystem", 00:31:49.117 "trtype": "$TEST_TRANSPORT", 00:31:49.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:49.117 "adrfam": "ipv4", 00:31:49.117 "trsvcid": "$NVMF_PORT", 00:31:49.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:49.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:49.117 "hdgst": ${hdgst:-false}, 00:31:49.117 "ddgst": ${ddgst:-false} 00:31:49.117 }, 00:31:49.117 "method": "bdev_nvme_attach_controller" 00:31:49.117 } 00:31:49.117 EOF 00:31:49.117 )") 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:49.117 "params": { 00:31:49.117 "name": "Nvme0", 00:31:49.117 "trtype": "tcp", 00:31:49.117 "traddr": "10.0.0.2", 00:31:49.117 "adrfam": "ipv4", 00:31:49.117 "trsvcid": "4420", 00:31:49.117 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:49.117 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:49.117 "hdgst": false, 00:31:49.117 "ddgst": false 00:31:49.117 }, 00:31:49.117 "method": "bdev_nvme_attach_controller" 00:31:49.117 },{ 00:31:49.117 "params": { 00:31:49.117 "name": "Nvme1", 00:31:49.117 "trtype": "tcp", 00:31:49.117 "traddr": "10.0.0.2", 00:31:49.117 "adrfam": "ipv4", 00:31:49.117 "trsvcid": "4420", 00:31:49.117 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:49.117 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:49.117 "hdgst": false, 00:31:49.117 "ddgst": false 00:31:49.117 }, 00:31:49.117 "method": "bdev_nvme_attach_controller" 00:31:49.117 }' 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # asan_lib= 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # asan_lib= 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:49.117 00:08:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:49.117 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:49.117 ... 00:31:49.117 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:49.117 ... 
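The block above pipes a generated bdev_nvme_attach_controller configuration into fio through SPDK's bdev ioengine (LD_PRELOAD of build/fio/spdk_bdev plus --spdk_json_conf fed from /dev/fd/62). For reference, a minimal standalone sketch of the same launch follows; it is not part of the test script, the spdk path and nvmf_dif.json name are placeholders, and Nvme0n1 is the bdev name SPDK typically derives from the "Nvme0" controller entry.
# Sketch only, assuming SPDK built with fio support and a JSON config holding
# the same bdev_nvme_attach_controller entries printed by gen_nvmf_target_json.
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
/usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=nvmf_dif.json --thread=1 \
        --name=filename0 --filename=Nvme0n1 --rw=randread \
        --bs=8k,16k,128k --numjobs=2 --iodepth=8 --runtime=5
The bs, numjobs, iodepth, and runtime values mirror the parameters set at target/dif.sh@115 earlier in this trace; thread=1 is required when fio drives the spdk_bdev plugin.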
00:31:49.117 fio-3.35 00:31:49.117 Starting 4 threads 00:31:54.406 00:31:54.406 filename0: (groupid=0, jobs=1): err= 0: pid=690420: Tue Jul 16 00:08:09 2024 00:31:54.406 read: IOPS=2072, BW=16.2MiB/s (17.0MB/s)(81.0MiB/5002msec) 00:31:54.406 slat (nsec): min=7876, max=37561, avg=8793.24, stdev=2602.96 00:31:54.406 clat (usec): min=2076, max=45909, avg=3835.92, stdev=1322.48 00:31:54.406 lat (usec): min=2084, max=45945, avg=3844.71, stdev=1322.63 00:31:54.406 clat percentiles (usec): 00:31:54.406 | 1.00th=[ 2606], 5.00th=[ 2999], 10.00th=[ 3163], 20.00th=[ 3392], 00:31:54.406 | 30.00th=[ 3490], 40.00th=[ 3621], 50.00th=[ 3687], 60.00th=[ 3752], 00:31:54.406 | 70.00th=[ 3916], 80.00th=[ 4146], 90.00th=[ 4686], 95.00th=[ 5145], 00:31:54.406 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[ 6390], 99.95th=[45876], 00:31:54.406 | 99.99th=[45876] 00:31:54.406 bw ( KiB/s): min=14976, max=17376, per=24.60%, avg=16565.33, stdev=673.66, samples=9 00:31:54.406 iops : min= 1872, max= 2172, avg=2070.67, stdev=84.21, samples=9 00:31:54.406 lat (msec) : 4=73.55%, 10=26.37%, 50=0.08% 00:31:54.406 cpu : usr=97.06%, sys=2.66%, ctx=13, majf=0, minf=1 00:31:54.406 IO depths : 1=0.2%, 2=1.5%, 4=69.6%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:54.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.406 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.406 issued rwts: total=10367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.406 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:54.406 filename0: (groupid=0, jobs=1): err= 0: pid=690421: Tue Jul 16 00:08:09 2024 00:31:54.406 read: IOPS=2083, BW=16.3MiB/s (17.1MB/s)(81.4MiB/5002msec) 00:31:54.406 slat (nsec): min=5417, max=36432, avg=8365.93, stdev=3042.75 00:31:54.406 clat (usec): min=1098, max=7021, avg=3816.01, stdev=743.72 00:31:54.406 lat (usec): min=1106, max=7030, avg=3824.38, stdev=743.40 00:31:54.406 clat percentiles (usec): 00:31:54.406 | 1.00th=[ 1483], 5.00th=[ 3032], 10.00th=[ 3195], 20.00th=[ 3392], 00:31:54.406 | 30.00th=[ 3490], 40.00th=[ 3621], 50.00th=[ 3687], 60.00th=[ 3752], 00:31:54.406 | 70.00th=[ 3851], 80.00th=[ 4113], 90.00th=[ 4948], 95.00th=[ 5407], 00:31:54.406 | 99.00th=[ 5932], 99.50th=[ 6128], 99.90th=[ 6521], 99.95th=[ 6783], 00:31:54.406 | 99.99th=[ 7046] 00:31:54.406 bw ( KiB/s): min=16192, max=18352, per=24.75%, avg=16670.40, stdev=623.09, samples=10 00:31:54.406 iops : min= 2024, max= 2294, avg=2083.80, stdev=77.89, samples=10 00:31:54.406 lat (msec) : 2=1.84%, 4=73.57%, 10=24.59% 00:31:54.406 cpu : usr=97.10%, sys=2.64%, ctx=10, majf=0, minf=9 00:31:54.406 IO depths : 1=0.2%, 2=0.8%, 4=71.4%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:54.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.406 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.406 issued rwts: total=10424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.406 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:54.406 filename1: (groupid=0, jobs=1): err= 0: pid=690422: Tue Jul 16 00:08:09 2024 00:31:54.406 read: IOPS=2215, BW=17.3MiB/s (18.1MB/s)(86.6MiB/5002msec) 00:31:54.406 slat (nsec): min=5409, max=31744, avg=7727.91, stdev=1883.66 00:31:54.406 clat (usec): min=1214, max=45168, avg=3589.97, stdev=1280.69 00:31:54.406 lat (usec): min=1223, max=45193, avg=3597.70, stdev=1280.68 00:31:54.406 clat percentiles (usec): 00:31:54.406 | 1.00th=[ 2245], 5.00th=[ 2606], 10.00th=[ 2835], 20.00th=[ 3064], 00:31:54.406 | 30.00th=[ 
3228], 40.00th=[ 3425], 50.00th=[ 3523], 60.00th=[ 3687], 00:31:54.406 | 70.00th=[ 3752], 80.00th=[ 3982], 90.00th=[ 4424], 95.00th=[ 4686], 00:31:54.406 | 99.00th=[ 5473], 99.50th=[ 5735], 99.90th=[ 6325], 99.95th=[45351], 00:31:54.406 | 99.99th=[45351] 00:31:54.406 bw ( KiB/s): min=16544, max=19168, per=26.31%, avg=17718.50, stdev=775.52, samples=10 00:31:54.406 iops : min= 2068, max= 2396, avg=2214.80, stdev=96.96, samples=10 00:31:54.406 lat (msec) : 2=0.34%, 4=79.95%, 10=19.63%, 50=0.07% 00:31:54.406 cpu : usr=97.54%, sys=2.14%, ctx=13, majf=0, minf=11 00:31:54.406 IO depths : 1=0.1%, 2=3.2%, 4=67.1%, 8=29.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:54.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.406 complete : 0=0.0%, 4=94.1%, 8=5.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.406 issued rwts: total=11080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.406 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:54.406 filename1: (groupid=0, jobs=1): err= 0: pid=690423: Tue Jul 16 00:08:09 2024 00:31:54.406 read: IOPS=2046, BW=16.0MiB/s (16.8MB/s)(80.0MiB/5002msec) 00:31:54.406 slat (nsec): min=5407, max=36763, avg=8200.69, stdev=2990.24 00:31:54.406 clat (usec): min=1957, max=45170, avg=3886.59, stdev=1324.80 00:31:54.406 lat (usec): min=1965, max=45206, avg=3894.79, stdev=1325.01 00:31:54.406 clat percentiles (usec): 00:31:54.406 | 1.00th=[ 2769], 5.00th=[ 3130], 10.00th=[ 3261], 20.00th=[ 3425], 00:31:54.406 | 30.00th=[ 3523], 40.00th=[ 3621], 50.00th=[ 3720], 60.00th=[ 3752], 00:31:54.406 | 70.00th=[ 3916], 80.00th=[ 4146], 90.00th=[ 4883], 95.00th=[ 5407], 00:31:54.406 | 99.00th=[ 5932], 99.50th=[ 6194], 99.90th=[ 6849], 99.95th=[45351], 00:31:54.406 | 99.99th=[45351] 00:31:54.406 bw ( KiB/s): min=15008, max=17152, per=24.30%, avg=16364.80, stdev=580.81, samples=10 00:31:54.406 iops : min= 1876, max= 2144, avg=2045.60, stdev=72.60, samples=10 00:31:54.406 lat (msec) : 2=0.02%, 4=73.82%, 10=26.08%, 50=0.08% 00:31:54.406 cpu : usr=96.76%, sys=2.96%, ctx=9, majf=0, minf=0 00:31:54.406 IO depths : 1=0.2%, 2=0.7%, 4=71.6%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:54.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.406 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:54.406 issued rwts: total=10236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:54.406 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:54.406 00:31:54.406 Run status group 0 (all jobs): 00:31:54.406 READ: bw=65.8MiB/s (69.0MB/s), 16.0MiB/s-17.3MiB/s (16.8MB/s-18.1MB/s), io=329MiB (345MB), run=5002-5002msec 00:31:54.406 00:08:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:54.406 00:08:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:54.406 00:08:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:54.406 00:08:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:54.406 00:08:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:54.406 00:08:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:54.406 00:08:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:54.406 00:08:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:54.406 00:08:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:54.406 
00:08:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:54.406 00:08:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:54.407 00:08:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:54.407 00:08:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:54.407 00:08:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:54.407 00:08:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:54.407 00:08:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:54.407 00:08:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:54.407 00:08:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:54.407 00:08:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:54.407 00:08:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:54.407 00:08:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:54.407 00:08:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:54.407 00:08:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:54.407 00:08:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:54.407 00:31:54.407 real 0m24.632s 00:31:54.407 user 5m16.024s 00:31:54.407 sys 0m3.936s 00:31:54.407 00:08:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1118 -- # xtrace_disable 00:31:54.407 00:08:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:54.407 ************************************ 00:31:54.407 END TEST fio_dif_rand_params 00:31:54.407 ************************************ 00:31:54.669 00:08:09 nvmf_dif -- common/autotest_common.sh@1136 -- # return 0 00:31:54.669 00:08:09 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:54.669 00:08:09 nvmf_dif -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:31:54.669 00:08:09 nvmf_dif -- common/autotest_common.sh@1099 -- # xtrace_disable 00:31:54.669 00:08:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:54.669 ************************************ 00:31:54.669 START TEST fio_dif_digest 00:31:54.669 ************************************ 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1117 -- # fio_dif_digest 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- 
target/dif.sh@130 -- # create_subsystems 0 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:54.669 bdev_null0 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@553 -- # xtrace_disable 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:54.669 [2024-07-16 00:08:09.707552] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:54.669 { 00:31:54.669 "params": { 00:31:54.669 "name": "Nvme$subsystem", 00:31:54.669 "trtype": "$TEST_TRANSPORT", 00:31:54.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:54.669 "adrfam": "ipv4", 00:31:54.669 "trsvcid": 
"$NVMF_PORT", 00:31:54.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:54.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:54.669 "hdgst": ${hdgst:-false}, 00:31:54.669 "ddgst": ${ddgst:-false} 00:31:54.669 }, 00:31:54.669 "method": "bdev_nvme_attach_controller" 00:31:54.669 } 00:31:54.669 EOF 00:31:54.669 )") 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local sanitizers 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # shift 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local asan_lib= 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # grep libasan 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:54.669 "params": { 00:31:54.669 "name": "Nvme0", 00:31:54.669 "trtype": "tcp", 00:31:54.669 "traddr": "10.0.0.2", 00:31:54.669 "adrfam": "ipv4", 00:31:54.669 "trsvcid": "4420", 00:31:54.669 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:54.669 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:54.669 "hdgst": true, 00:31:54.669 "ddgst": true 00:31:54.669 }, 00:31:54.669 "method": "bdev_nvme_attach_controller" 00:31:54.669 }' 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # asan_lib= 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # asan_lib= 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:54.669 00:08:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:55.240 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:55.240 ... 
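The only functional change from the earlier randread pass is in the generated controller entry: "hdgst": true and "ddgst": true enable NVMe/TCP header and data digests for the fio_dif_digest run. As a point of comparison (not something dif.sh does), the same attachment could be issued as a live RPC through the scripts/rpc.py wrapper that ships in the SPDK checkout; the path below is a placeholder and the digest switches are an assumption mirroring the hdgst/ddgst parameter names from the config above.
# Sketch only: digest-enabled attach over RPC rather than --spdk_json_conf.
# The digest flag spellings are assumed from the JSON parameter names; verify
# against the local scripts/rpc.py before relying on them.
/path/to/spdk/scripts/rpc.py bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --hdgst --ddgst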
00:31:55.240 fio-3.35 00:31:55.240 Starting 3 threads 00:32:07.544 00:32:07.544 filename0: (groupid=0, jobs=1): err= 0: pid=691625: Tue Jul 16 00:08:20 2024 00:32:07.544 read: IOPS=233, BW=29.2MiB/s (30.6MB/s)(294MiB/10048msec) 00:32:07.544 slat (nsec): min=5657, max=32361, avg=6457.31, stdev=898.70 00:32:07.544 clat (usec): min=8265, max=56538, avg=12811.07, stdev=3520.42 00:32:07.544 lat (usec): min=8271, max=56545, avg=12817.52, stdev=3520.42 00:32:07.544 clat percentiles (usec): 00:32:07.544 | 1.00th=[ 8848], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[11469], 00:32:07.544 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12780], 60.00th=[13042], 00:32:07.544 | 70.00th=[13435], 80.00th=[13698], 90.00th=[14353], 95.00th=[14746], 00:32:07.544 | 99.00th=[16188], 99.50th=[52691], 99.90th=[55837], 99.95th=[55837], 00:32:07.544 | 99.99th=[56361] 00:32:07.544 bw ( KiB/s): min=26880, max=33536, per=37.43%, avg=30028.80, stdev=1504.46, samples=20 00:32:07.544 iops : min= 210, max= 262, avg=234.60, stdev=11.75, samples=20 00:32:07.544 lat (msec) : 10=7.67%, 20=91.74%, 50=0.04%, 100=0.55% 00:32:07.544 cpu : usr=96.03%, sys=3.68%, ctx=17, majf=0, minf=164 00:32:07.544 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:07.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.544 issued rwts: total=2348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.544 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:07.544 filename0: (groupid=0, jobs=1): err= 0: pid=691626: Tue Jul 16 00:08:20 2024 00:32:07.544 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(269MiB/10047msec) 00:32:07.544 slat (nsec): min=5678, max=31678, avg=6455.76, stdev=807.61 00:32:07.544 clat (usec): min=8610, max=95240, avg=13969.86, stdev=4181.01 00:32:07.544 lat (usec): min=8617, max=95247, avg=13976.32, stdev=4181.06 00:32:07.544 clat percentiles (usec): 00:32:07.544 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[11076], 20.00th=[12518], 00:32:07.544 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13960], 60.00th=[14222], 00:32:07.544 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15533], 95.00th=[16057], 00:32:07.544 | 99.00th=[17957], 99.50th=[54789], 99.90th=[56886], 99.95th=[57934], 00:32:07.544 | 99.99th=[94897] 00:32:07.544 bw ( KiB/s): min=23296, max=29440, per=34.32%, avg=27532.80, stdev=1430.12, samples=20 00:32:07.544 iops : min= 182, max= 230, avg=215.10, stdev=11.17, samples=20 00:32:07.544 lat (msec) : 10=2.93%, 20=96.19%, 50=0.19%, 100=0.70% 00:32:07.544 cpu : usr=95.80%, sys=3.95%, ctx=23, majf=0, minf=126 00:32:07.544 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:07.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.544 issued rwts: total=2153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.544 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:07.544 filename0: (groupid=0, jobs=1): err= 0: pid=691628: Tue Jul 16 00:08:20 2024 00:32:07.544 read: IOPS=179, BW=22.5MiB/s (23.5MB/s)(225MiB/10005msec) 00:32:07.544 slat (nsec): min=5671, max=33388, avg=6496.47, stdev=1023.39 00:32:07.544 clat (usec): min=8138, max=94975, avg=16692.95, stdev=9237.02 00:32:07.544 lat (usec): min=8145, max=94982, avg=16699.45, stdev=9237.01 00:32:07.544 clat percentiles (usec): 00:32:07.544 | 1.00th=[ 9896], 5.00th=[11600], 10.00th=[12911], 
20.00th=[13698], 00:32:07.544 | 30.00th=[14091], 40.00th=[14484], 50.00th=[14746], 60.00th=[15139], 00:32:07.544 | 70.00th=[15533], 80.00th=[16057], 90.00th=[16909], 95.00th=[19268], 00:32:07.544 | 99.00th=[56886], 99.50th=[57410], 99.90th=[94897], 99.95th=[94897], 00:32:07.544 | 99.99th=[94897] 00:32:07.544 bw ( KiB/s): min=18688, max=26624, per=29.17%, avg=23401.74, stdev=2151.73, samples=19 00:32:07.544 iops : min= 146, max= 208, avg=182.79, stdev=16.89, samples=19 00:32:07.544 lat (msec) : 10=1.17%, 20=93.93%, 100=4.90% 00:32:07.544 cpu : usr=96.43%, sys=3.28%, ctx=17, majf=0, minf=124 00:32:07.544 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:07.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.544 issued rwts: total=1797,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.545 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:07.545 00:32:07.545 Run status group 0 (all jobs): 00:32:07.545 READ: bw=78.3MiB/s (82.2MB/s), 22.5MiB/s-29.2MiB/s (23.5MB/s-30.6MB/s), io=787MiB (825MB), run=10005-10048msec 00:32:07.545 00:08:20 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:07.545 00:08:20 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:07.545 00:08:20 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:07.545 00:08:20 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:07.545 00:08:20 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:07.545 00:08:20 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:07.545 00:08:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@553 -- # xtrace_disable 00:32:07.545 00:08:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:07.545 00:08:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:32:07.545 00:08:20 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:07.545 00:08:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@553 -- # xtrace_disable 00:32:07.545 00:08:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:07.545 00:08:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:32:07.545 00:32:07.545 real 0m11.125s 00:32:07.545 user 0m42.613s 00:32:07.545 sys 0m1.414s 00:32:07.545 00:08:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1118 -- # xtrace_disable 00:32:07.545 00:08:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:07.545 ************************************ 00:32:07.545 END TEST fio_dif_digest 00:32:07.545 ************************************ 00:32:07.545 00:08:20 nvmf_dif -- common/autotest_common.sh@1136 -- # return 0 00:32:07.545 00:08:20 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:07.545 00:08:20 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:07.545 00:08:20 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:07.545 00:08:20 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:32:07.545 00:08:20 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:07.545 00:08:20 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:32:07.545 00:08:20 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:07.545 00:08:20 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:07.545 rmmod nvme_tcp 00:32:07.545 
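The destroy_subsystems 0 call traced above boils down to two RPCs per subsystem index. A short sketch, assuming rpc_cmd wraps scripts/rpc.py against the running target as it does elsewhere in the trace; this paraphrases what target/dif.sh appears to do and is not the literal script.

destroy_subsystem() {
  local sub_id=$1
  # Remove the NVMe-oF subsystem first, then drop its null backing bdev.
  rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${sub_id}"
  rpc_cmd bdev_null_delete "bdev_null${sub_id}"
}

destroy_subsystems() {
  local sub
  for sub in "$@"; do destroy_subsystem "$sub"; done
}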
rmmod nvme_fabrics 00:32:07.545 rmmod nvme_keyring 00:32:07.545 00:08:20 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:07.545 00:08:20 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:32:07.545 00:08:20 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:32:07.545 00:08:20 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 681153 ']' 00:32:07.545 00:08:20 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 681153 00:32:07.545 00:08:20 nvmf_dif -- common/autotest_common.sh@942 -- # '[' -z 681153 ']' 00:32:07.545 00:08:20 nvmf_dif -- common/autotest_common.sh@946 -- # kill -0 681153 00:32:07.545 00:08:20 nvmf_dif -- common/autotest_common.sh@947 -- # uname 00:32:07.545 00:08:20 nvmf_dif -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:32:07.545 00:08:20 nvmf_dif -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 681153 00:32:07.545 00:08:20 nvmf_dif -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:32:07.545 00:08:20 nvmf_dif -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:32:07.545 00:08:20 nvmf_dif -- common/autotest_common.sh@960 -- # echo 'killing process with pid 681153' 00:32:07.545 killing process with pid 681153 00:32:07.545 00:08:20 nvmf_dif -- common/autotest_common.sh@961 -- # kill 681153 00:32:07.545 00:08:20 nvmf_dif -- common/autotest_common.sh@966 -- # wait 681153 00:32:07.545 00:08:21 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:07.545 00:08:21 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:09.457 Waiting for block devices as requested 00:32:09.457 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:09.457 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:09.457 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:09.457 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:09.457 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:09.457 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:09.718 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:09.718 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:09.718 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:09.980 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:09.980 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:09.980 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:10.241 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:10.241 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:10.241 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:10.241 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:10.501 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:10.501 00:08:25 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:10.501 00:08:25 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:10.501 00:08:25 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:10.501 00:08:25 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:10.501 00:08:25 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:10.501 00:08:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:10.501 00:08:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:12.415 00:08:27 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:12.415 00:32:12.415 real 1m18.833s 00:32:12.415 user 8m2.195s 00:32:12.415 sys 0m20.364s 00:32:12.415 00:08:27 nvmf_dif -- common/autotest_common.sh@1118 -- # xtrace_disable 00:32:12.415 00:08:27 nvmf_dif -- 
common/autotest_common.sh@10 -- # set +x 00:32:12.416 ************************************ 00:32:12.416 END TEST nvmf_dif 00:32:12.416 ************************************ 00:32:12.677 00:08:27 -- common/autotest_common.sh@1136 -- # return 0 00:32:12.677 00:08:27 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:12.677 00:08:27 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:32:12.677 00:08:27 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:32:12.677 00:08:27 -- common/autotest_common.sh@10 -- # set +x 00:32:12.677 ************************************ 00:32:12.677 START TEST nvmf_abort_qd_sizes 00:32:12.677 ************************************ 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:12.677 * Looking for test storage... 00:32:12.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:12.677 00:08:27 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:32:12.677 00:08:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:20.816 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:20.816 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:20.816 Found net devices under 0000:31:00.0: cvl_0_0 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:20.816 Found net devices under 0000:31:00.1: cvl_0_1 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
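Before wiring up the test network, nvmf/common.sh walks the PCI devices it classified as Intel E810 parts and resolves each one to its kernel net device through sysfs; the two cvl_0_* names found above come out of that loop. A condensed sketch of the idea, assuming the surrounding arrays look as they do in the trace:

# The two E810 ports reported above.
pci_devs=(0000:31:00.0 0000:31:00.1)
net_devs=()
for pci in "${pci_devs[@]}"; do
  # The driver exposes the bound interface(s) under the device's sysfs node.
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")                   # keep only the interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0, cvl_0_1
  net_devs+=("${pci_net_devs[@]}")
done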
00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:20.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:20.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:32:20.816 00:32:20.816 --- 10.0.0.2 ping statistics --- 00:32:20.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.816 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:20.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:20.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:32:20.816 00:32:20.816 --- 10.0.0.1 ping statistics --- 00:32:20.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.816 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:20.816 00:08:35 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:25.020 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:25.020 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:25.020 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:25.020 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:25.020 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:25.020 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:25.020 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:25.020 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:25.020 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:25.020 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:25.020 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:25.020 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:25.020 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:25.020 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:25.020 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:25.020 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:25.020 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:25.020 00:08:39 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:25.020 00:08:39 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:25.020 00:08:39 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:25.020 00:08:39 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:25.020 00:08:39 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:25.020 00:08:39 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:25.020 00:08:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:25.020 00:08:39 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:25.020 00:08:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:25.020 00:08:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:25.020 00:08:40 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=701960 00:32:25.020 00:08:40 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 701960 00:32:25.020 00:08:40 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:25.020 00:08:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@823 -- # '[' -z 701960 ']' 00:32:25.020 00:08:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.020 00:08:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@828 -- # local max_retries=100 00:32:25.020 00:08:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:25.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:25.020 00:08:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # xtrace_disable 00:32:25.020 00:08:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:25.020 [2024-07-16 00:08:40.063010] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:32:25.020 [2024-07-16 00:08:40.063074] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:25.020 [2024-07-16 00:08:40.144655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:25.280 [2024-07-16 00:08:40.212257] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:25.280 [2024-07-16 00:08:40.212292] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:25.280 [2024-07-16 00:08:40.212300] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:25.280 [2024-07-16 00:08:40.212307] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:25.280 [2024-07-16 00:08:40.212312] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:25.280 [2024-07-16 00:08:40.212446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:25.280 [2024-07-16 00:08:40.212623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:25.280 [2024-07-16 00:08:40.212781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.280 [2024-07-16 00:08:40.212781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # return 0 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:32:25.852 
00:08:40 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # xtrace_disable 00:32:25.852 00:08:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:25.852 ************************************ 00:32:25.852 START TEST spdk_target_abort 00:32:25.852 ************************************ 00:32:25.852 00:08:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1117 -- # spdk_target 00:32:25.852 00:08:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:25.852 00:08:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:32:25.852 00:08:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:32:25.852 00:08:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:26.114 spdk_targetn1 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:26.114 [2024-07-16 00:08:41.245288] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:26.114 [2024-07-16 00:08:41.285557] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:26.114 00:08:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:26.374 [2024-07-16 00:08:41.395261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:189 nsid:1 lba:104 len:8 PRP1 0x2000078be000 PRP2 0x0 00:32:26.374 [2024-07-16 00:08:41.395287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:32:26.374 [2024-07-16 00:08:41.396239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:168 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:32:26.374 [2024-07-16 00:08:41.396255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0016 p:1 m:0 dnr:0 00:32:26.374 [2024-07-16 00:08:41.397146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:224 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:32:26.375 [2024-07-16 00:08:41.397160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:001f p:1 m:0 dnr:0 00:32:26.375 [2024-07-16 00:08:41.401680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:248 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:32:26.375 [2024-07-16 00:08:41.401693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0023 p:1 m:0 dnr:0 00:32:26.375 [2024-07-16 00:08:41.409675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:504 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:32:26.375 [2024-07-16 00:08:41.409691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0043 p:1 m:0 dnr:0 00:32:26.375 [2024-07-16 00:08:41.458378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2112 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:32:26.375 [2024-07-16 00:08:41.458399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:26.375 [2024-07-16 00:08:41.458682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2136 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:32:26.375 [2024-07-16 00:08:41.458693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:26.375 [2024-07-16 00:08:41.459376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2176 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:32:26.375 [2024-07-16 00:08:41.459389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:26.375 [2024-07-16 00:08:41.474459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2656 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:32:26.375 [2024-07-16 00:08:41.474476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:29.671 Initializing NVMe Controllers 00:32:29.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:29.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:29.671 Initialization complete. Launching workers. 
00:32:29.671 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11321, failed: 9 00:32:29.671 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3388, failed to submit 7942 00:32:29.671 success 718, unsuccess 2670, failed 0 00:32:29.671 00:08:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:29.671 00:08:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:29.671 [2024-07-16 00:08:44.600559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:528 len:8 PRP1 0x200007c56000 PRP2 0x0 00:32:29.671 [2024-07-16 00:08:44.600600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:32:29.671 [2024-07-16 00:08:44.686345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:2456 len:8 PRP1 0x200007c54000 PRP2 0x0 00:32:29.671 [2024-07-16 00:08:44.686371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:29.671 [2024-07-16 00:08:44.694348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:2616 len:8 PRP1 0x200007c4a000 PRP2 0x0 00:32:29.671 [2024-07-16 00:08:44.694369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:32.973 Initializing NVMe Controllers 00:32:32.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:32.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:32.973 Initialization complete. Launching workers. 00:32:32.973 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8600, failed: 3 00:32:32.973 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1208, failed to submit 7395 00:32:32.973 success 350, unsuccess 858, failed 0 00:32:32.973 00:08:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:32.974 00:08:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:33.918 [2024-07-16 00:08:48.997309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:149 nsid:1 lba:123800 len:8 PRP1 0x200007902000 PRP2 0x0 00:32:33.918 [2024-07-16 00:08:48.997345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:149 cdw0:0 sqhd:00b9 p:1 m:0 dnr:0 00:32:35.828 Initializing NVMe Controllers 00:32:35.828 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:35.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:35.828 Initialization complete. Launching workers. 
00:32:35.828 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42126, failed: 1 00:32:35.828 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2564, failed to submit 39563 00:32:35.828 success 599, unsuccess 1965, failed 0 00:32:35.828 00:08:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:35.828 00:08:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:32:35.828 00:08:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:35.828 00:08:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:32:35.828 00:08:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:35.828 00:08:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:32:35.828 00:08:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:37.741 00:08:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:32:37.741 00:08:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 701960 00:32:37.741 00:08:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@942 -- # '[' -z 701960 ']' 00:32:37.741 00:08:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # kill -0 701960 00:32:37.741 00:08:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@947 -- # uname 00:32:37.741 00:08:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:32:37.741 00:08:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 701960 00:32:37.741 00:08:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:32:37.741 00:08:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:32:37.741 00:08:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # echo 'killing process with pid 701960' 00:32:37.741 killing process with pid 701960 00:32:37.741 00:08:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@961 -- # kill 701960 00:32:37.741 00:08:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # wait 701960 00:32:38.001 00:32:38.001 real 0m12.030s 00:32:38.001 user 0m48.845s 00:32:38.001 sys 0m1.866s 00:32:38.001 00:08:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1118 -- # xtrace_disable 00:32:38.001 00:08:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:38.001 ************************************ 00:32:38.001 END TEST spdk_target_abort 00:32:38.001 ************************************ 00:32:38.001 00:08:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1136 -- # return 0 00:32:38.001 00:08:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:38.001 00:08:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:32:38.001 00:08:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # xtrace_disable 00:32:38.001 00:08:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:38.001 
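The spdk_target_abort run that just completed is, stripped of the xtrace noise, a short sequence: export the local NVMe drive over NVMe/TCP, then point the abort example at it at increasing queue depths. A sketch assembled from the rpc_cmd calls visible in the trace; the loop framing is an assumed paraphrase of abort_qd_sizes.sh, and rpc_cmd is assumed to wrap scripts/rpc.py as elsewhere in the log.

# Attach the local drive at 0000:65:00.0 as bdev spdk_targetn1 and expose it over TCP.
rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

# Run the abort example against the listener at each queue depth used above.
for qd in 4 24 64; do
  ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done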
************************************ 00:32:38.001 START TEST kernel_target_abort 00:32:38.001 ************************************ 00:32:38.001 00:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1117 -- # kernel_target 00:32:38.001 00:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:38.001 00:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:32:38.001 00:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.001 00:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.001 00:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.001 00:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.001 00:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.001 00:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.001 00:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.001 00:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.001 00:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.001 00:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:38.001 00:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:38.001 00:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:38.001 00:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:38.001 00:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:38.001 00:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:38.001 00:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:32:38.001 00:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:38.001 00:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:38.001 00:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:38.001 00:08:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:42.206 Waiting for block devices as requested 00:32:42.206 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:42.206 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:42.206 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:42.206 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:42.206 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:42.206 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:42.206 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:42.468 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:42.468 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:42.729 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:42.729 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:42.729 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:42.729 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:42.990 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:42.990 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:42.990 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:42.990 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:43.250 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:43.250 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:43.250 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:43.250 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:32:43.250 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:43.250 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:32:43.250 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:43.250 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:43.250 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:43.250 No valid GPT data, bailing 00:32:43.250 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:43.250 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:32:43.250 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:32:43.250 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:43.250 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:43.250 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:43.250 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:43.250 00:08:58 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:43.250 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:43.250 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:32:43.250 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:43.250 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:32:43.250 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:43.250 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:32:43.250 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:32:43.250 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:32:43.251 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:43.251 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:32:43.251 00:32:43.251 Discovery Log Number of Records 2, Generation counter 2 00:32:43.251 =====Discovery Log Entry 0====== 00:32:43.251 trtype: tcp 00:32:43.251 adrfam: ipv4 00:32:43.251 subtype: current discovery subsystem 00:32:43.251 treq: not specified, sq flow control disable supported 00:32:43.251 portid: 1 00:32:43.251 trsvcid: 4420 00:32:43.251 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:43.251 traddr: 10.0.0.1 00:32:43.251 eflags: none 00:32:43.251 sectype: none 00:32:43.251 =====Discovery Log Entry 1====== 00:32:43.251 trtype: tcp 00:32:43.251 adrfam: ipv4 00:32:43.251 subtype: nvme subsystem 00:32:43.251 treq: not specified, sq flow control disable supported 00:32:43.251 portid: 1 00:32:43.251 trsvcid: 4420 00:32:43.251 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:43.251 traddr: 10.0.0.1 00:32:43.251 eflags: none 00:32:43.251 sectype: none 00:32:43.251 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:43.251 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:43.251 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:43.251 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:43.251 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:43.251 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:43.251 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:43.251 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:43.251 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:43.251 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:43.251 00:08:58 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:43.251 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:43.251 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:43.251 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:43.251 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:43.251 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:43.251 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:43.251 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:43.251 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:43.251 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:43.251 00:08:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:46.570 Initializing NVMe Controllers 00:32:46.570 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:46.570 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:46.570 Initialization complete. Launching workers. 00:32:46.570 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56257, failed: 0 00:32:46.570 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 56257, failed to submit 0 00:32:46.570 success 0, unsuccess 56257, failed 0 00:32:46.570 00:09:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:46.570 00:09:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:49.948 Initializing NVMe Controllers 00:32:49.948 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:49.948 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:49.948 Initialization complete. Launching workers. 
00:32:49.948 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 98465, failed: 0 00:32:49.948 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24826, failed to submit 73639 00:32:49.948 success 0, unsuccess 24826, failed 0 00:32:49.948 00:09:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:49.948 00:09:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:52.492 Initializing NVMe Controllers 00:32:52.492 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:52.492 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:52.493 Initialization complete. Launching workers. 00:32:52.493 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 93947, failed: 0 00:32:52.493 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23470, failed to submit 70477 00:32:52.493 success 0, unsuccess 23470, failed 0 00:32:52.493 00:09:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:52.493 00:09:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:52.493 00:09:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:52.493 00:09:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:52.493 00:09:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:52.493 00:09:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:52.493 00:09:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:52.493 00:09:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:52.493 00:09:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:52.493 00:09:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:56.698 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:56.698 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:56.698 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:56.698 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:56.698 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:56.698 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:56.698 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:56.698 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:56.698 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:56.698 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:56.698 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:56.698 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:56.698 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:56.698 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:56.698 0000:00:01.0 (8086 0b00): ioatdma -> 
vfio-pci 00:32:56.698 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:58.081 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:58.341 00:32:58.341 real 0m20.301s 00:32:58.341 user 0m9.128s 00:32:58.341 sys 0m6.433s 00:32:58.341 00:09:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1118 -- # xtrace_disable 00:32:58.341 00:09:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:58.341 ************************************ 00:32:58.341 END TEST kernel_target_abort 00:32:58.341 ************************************ 00:32:58.341 00:09:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1136 -- # return 0 00:32:58.341 00:09:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:58.341 00:09:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:58.341 00:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:58.341 00:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:58.341 00:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:58.341 00:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:58.341 00:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:58.341 00:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:58.341 rmmod nvme_tcp 00:32:58.341 rmmod nvme_fabrics 00:32:58.341 rmmod nvme_keyring 00:32:58.341 00:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:58.341 00:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:58.342 00:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:58.342 00:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 701960 ']' 00:32:58.342 00:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 701960 00:32:58.342 00:09:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@942 -- # '[' -z 701960 ']' 00:32:58.342 00:09:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # kill -0 701960 00:32:58.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 946: kill: (701960) - No such process 00:32:58.342 00:09:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@969 -- # echo 'Process with pid 701960 is not found' 00:32:58.342 Process with pid 701960 is not found 00:32:58.342 00:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:58.342 00:09:13 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:02.547 Waiting for block devices as requested 00:33:02.547 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:02.547 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:02.547 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:02.547 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:02.547 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:02.547 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:02.547 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:02.808 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:02.808 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:03.069 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:03.069 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:03.069 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:03.069 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:03.330 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:03.330 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 
00:33:03.330 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:03.590 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:03.590 00:09:18 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:03.590 00:09:18 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:03.590 00:09:18 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:03.590 00:09:18 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:03.590 00:09:18 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.590 00:09:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:03.590 00:09:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.500 00:09:20 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:05.501 00:33:05.501 real 0m52.961s 00:33:05.501 user 1m3.770s 00:33:05.501 sys 0m19.743s 00:33:05.501 00:09:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1118 -- # xtrace_disable 00:33:05.501 00:09:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:05.501 ************************************ 00:33:05.501 END TEST nvmf_abort_qd_sizes 00:33:05.501 ************************************ 00:33:05.501 00:09:20 -- common/autotest_common.sh@1136 -- # return 0 00:33:05.501 00:09:20 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:05.501 00:09:20 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:33:05.501 00:09:20 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:33:05.501 00:09:20 -- common/autotest_common.sh@10 -- # set +x 00:33:05.762 ************************************ 00:33:05.762 START TEST keyring_file 00:33:05.762 ************************************ 00:33:05.762 00:09:20 keyring_file -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:05.762 * Looking for test storage... 
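(Not part of the captured output — a minimal sketch of the configfs sequence that nvmf/common.sh traced during the kernel_target_abort run above. The NQN, backing device and listen address are the values this run used; the attribute file names follow the standard Linux nvmet configfs layout rather than the wrapped trace, so treat them as assumptions.)
  modprobe nvmet nvmet-tcp
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$subsys" "$subsys/namespaces/1" /sys/kernel/config/nvmet/ports/1
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > /sys/kernel/config/nvmet/ports/1/addr_traddr
  echo tcp          > /sys/kernel/config/nvmet/ports/1/addr_trtype
  echo 4420         > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
  echo ipv4         > /sys/kernel/config/nvmet/ports/1/addr_adrfam
  ln -s "$subsys" /sys/kernel/config/nvmet/ports/1/subsystems/
  # teardown mirrors the clean_kernel_target trace above: disable the namespace,
  # remove the port symlink, rmdir in reverse order, then modprobe -r nvmet_tcp nvmet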
00:33:05.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:05.762 00:09:20 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:05.762 00:09:20 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:05.762 00:09:20 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:05.762 00:09:20 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:05.762 00:09:20 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:05.762 00:09:20 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:05.762 00:09:20 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:05.762 00:09:20 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:05.762 00:09:20 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:05.762 00:09:20 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:05.762 00:09:20 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:05.762 00:09:20 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:05.763 00:09:20 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:05.763 00:09:20 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:05.763 00:09:20 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:05.763 00:09:20 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.763 00:09:20 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.763 00:09:20 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.763 00:09:20 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:05.763 00:09:20 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@47 -- # : 0 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:05.763 00:09:20 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:05.763 00:09:20 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:05.763 00:09:20 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:05.763 00:09:20 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:05.763 00:09:20 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:05.763 00:09:20 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:05.763 00:09:20 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:05.763 00:09:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:05.763 00:09:20 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:05.763 00:09:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:05.763 00:09:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:05.763 00:09:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:05.763 00:09:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.aCTisyOdaz 00:33:05.763 00:09:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:05.763 00:09:20 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.aCTisyOdaz 00:33:05.763 00:09:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.aCTisyOdaz 00:33:05.763 00:09:20 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.aCTisyOdaz 00:33:05.763 00:09:20 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:05.763 00:09:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:05.763 00:09:20 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:05.763 00:09:20 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:05.763 00:09:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:05.763 00:09:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:05.763 00:09:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.c3vlWNKXX7 00:33:05.763 00:09:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:05.763 00:09:20 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:06.025 00:09:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.c3vlWNKXX7 00:33:06.025 00:09:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.c3vlWNKXX7 00:33:06.025 00:09:20 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.c3vlWNKXX7 00:33:06.025 00:09:20 keyring_file -- keyring/file.sh@30 -- # tgtpid=712934 00:33:06.025 00:09:20 keyring_file -- keyring/file.sh@32 -- # waitforlisten 712934 00:33:06.025 00:09:20 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:06.025 00:09:20 keyring_file -- common/autotest_common.sh@823 -- # '[' -z 712934 ']' 00:33:06.025 00:09:20 keyring_file -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:06.025 00:09:20 keyring_file -- common/autotest_common.sh@828 -- # local max_retries=100 00:33:06.025 00:09:20 keyring_file -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:06.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:06.025 00:09:20 keyring_file -- common/autotest_common.sh@832 -- # xtrace_disable 00:33:06.025 00:09:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:06.025 [2024-07-16 00:09:21.032621] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:33:06.025 [2024-07-16 00:09:21.032697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid712934 ] 00:33:06.025 [2024-07-16 00:09:21.105690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.025 [2024-07-16 00:09:21.172791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:06.967 00:09:21 keyring_file -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:33:06.967 00:09:21 keyring_file -- common/autotest_common.sh@856 -- # return 0 00:33:06.967 00:09:21 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:33:06.967 00:09:21 keyring_file -- common/autotest_common.sh@553 -- # xtrace_disable 00:33:06.967 00:09:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:06.967 [2024-07-16 00:09:21.821267] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:06.967 null0 00:33:06.967 [2024-07-16 00:09:21.853311] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:06.967 [2024-07-16 00:09:21.853529] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:06.967 [2024-07-16 00:09:21.861317] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:33:06.967 00:09:21 keyring_file -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:33:06.967 00:09:21 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:06.967 00:09:21 keyring_file -- common/autotest_common.sh@642 -- # local es=0 00:33:06.967 00:09:21 keyring_file -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:06.967 00:09:21 keyring_file -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:33:06.967 00:09:21 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:33:06.967 00:09:21 keyring_file -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:33:06.967 00:09:21 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:33:06.967 00:09:21 keyring_file -- common/autotest_common.sh@645 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:06.967 00:09:21 keyring_file -- common/autotest_common.sh@553 -- # xtrace_disable 00:33:06.967 00:09:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:06.967 [2024-07-16 00:09:21.877355] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:33:06.967 request: 00:33:06.967 { 00:33:06.967 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:06.967 "secure_channel": false, 00:33:06.967 "listen_address": { 00:33:06.967 "trtype": "tcp", 00:33:06.967 "traddr": "127.0.0.1", 00:33:06.967 "trsvcid": "4420" 00:33:06.967 }, 00:33:06.967 "method": "nvmf_subsystem_add_listener", 00:33:06.967 "req_id": 1 00:33:06.967 } 00:33:06.967 Got JSON-RPC error response 00:33:06.967 response: 00:33:06.967 { 00:33:06.967 "code": -32602, 00:33:06.967 "message": "Invalid parameters" 00:33:06.967 } 00:33:06.967 00:09:21 keyring_file -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:33:06.967 00:09:21 keyring_file -- common/autotest_common.sh@645 -- # es=1 00:33:06.967 00:09:21 keyring_file -- 
common/autotest_common.sh@653 -- # (( es > 128 )) 00:33:06.967 00:09:21 keyring_file -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:33:06.967 00:09:21 keyring_file -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:33:06.967 00:09:21 keyring_file -- keyring/file.sh@46 -- # bperfpid=713091 00:33:06.967 00:09:21 keyring_file -- keyring/file.sh@48 -- # waitforlisten 713091 /var/tmp/bperf.sock 00:33:06.967 00:09:21 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:06.967 00:09:21 keyring_file -- common/autotest_common.sh@823 -- # '[' -z 713091 ']' 00:33:06.967 00:09:21 keyring_file -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:06.967 00:09:21 keyring_file -- common/autotest_common.sh@828 -- # local max_retries=100 00:33:06.967 00:09:21 keyring_file -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:06.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:06.967 00:09:21 keyring_file -- common/autotest_common.sh@832 -- # xtrace_disable 00:33:06.967 00:09:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:06.967 [2024-07-16 00:09:21.945415] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 00:33:06.967 [2024-07-16 00:09:21.945473] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid713091 ] 00:33:06.967 [2024-07-16 00:09:22.028565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.967 [2024-07-16 00:09:22.092892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:07.537 00:09:22 keyring_file -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:33:07.537 00:09:22 keyring_file -- common/autotest_common.sh@856 -- # return 0 00:33:07.537 00:09:22 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.aCTisyOdaz 00:33:07.537 00:09:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.aCTisyOdaz 00:33:07.797 00:09:22 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.c3vlWNKXX7 00:33:07.797 00:09:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.c3vlWNKXX7 00:33:08.058 00:09:22 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:33:08.058 00:09:22 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:33:08.058 00:09:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:08.058 00:09:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:08.058 00:09:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:08.058 00:09:23 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.aCTisyOdaz == \/\t\m\p\/\t\m\p\.\a\C\T\i\s\y\O\d\a\z ]] 00:33:08.058 00:09:23 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:33:08.058 00:09:23 keyring_file -- keyring/file.sh@52 -- # jq -r .path 
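(Not part of the captured output — the key files registered here hold NVMe TLS PSKs in the interchange format produced by prep_key earlier, a single "NVMeTLSkey-1:…:" line, and the checks that follow are plain JSON-RPC calls against the bdevperf socket. A minimal sketch using the same socket path and key names as this run:)
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.aCTisyOdaz
  $rpc -s /var/tmp/bperf.sock keyring_get_keys \
      | jq -r '.[] | select(.name == "key0") | .path'    # expect the tmp key file path
  $rpc -s /var/tmp/bperf.sock keyring_get_keys \
      | jq -r '.[] | select(.name == "key0") | .refcnt'  # 1 before attach, 2 once nvme0 holds it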
00:33:08.058 00:09:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:08.058 00:09:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:08.058 00:09:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:08.318 00:09:23 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.c3vlWNKXX7 == \/\t\m\p\/\t\m\p\.\c\3\v\l\W\N\K\X\X\7 ]] 00:33:08.318 00:09:23 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:33:08.318 00:09:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:08.318 00:09:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:08.318 00:09:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:08.318 00:09:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:08.318 00:09:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:08.318 00:09:23 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:33:08.318 00:09:23 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:33:08.318 00:09:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:08.318 00:09:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:08.318 00:09:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:08.318 00:09:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:08.318 00:09:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:08.579 00:09:23 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:08.579 00:09:23 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:08.579 00:09:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:08.838 [2024-07-16 00:09:23.781427] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:08.838 nvme0n1 00:33:08.838 00:09:23 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:33:08.838 00:09:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:08.838 00:09:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:08.838 00:09:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:08.838 00:09:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:08.838 00:09:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:09.100 00:09:24 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:33:09.100 00:09:24 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:33:09.100 00:09:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:09.100 00:09:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:09.100 00:09:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:09.100 00:09:24 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:09.100 00:09:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:09.100 00:09:24 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:33:09.100 00:09:24 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:09.100 Running I/O for 1 seconds... 00:33:10.482 00:33:10.482 Latency(us) 00:33:10.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:10.482 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:10.482 nvme0n1 : 1.01 10816.84 42.25 0.00 0.00 11779.79 7263.57 21189.97 00:33:10.482 =================================================================================================================== 00:33:10.482 Total : 10816.84 42.25 0.00 0.00 11779.79 7263.57 21189.97 00:33:10.482 0 00:33:10.482 00:09:25 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:10.482 00:09:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:10.482 00:09:25 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:33:10.482 00:09:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:10.482 00:09:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:10.482 00:09:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:10.482 00:09:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:10.482 00:09:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:10.482 00:09:25 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:33:10.483 00:09:25 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:33:10.483 00:09:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:10.483 00:09:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:10.483 00:09:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:10.483 00:09:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:10.483 00:09:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:10.744 00:09:25 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:10.744 00:09:25 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:10.744 00:09:25 keyring_file -- common/autotest_common.sh@642 -- # local es=0 00:33:10.744 00:09:25 keyring_file -- common/autotest_common.sh@644 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:10.744 00:09:25 keyring_file -- common/autotest_common.sh@630 -- # local arg=bperf_cmd 00:33:10.744 00:09:25 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:33:10.744 00:09:25 keyring_file -- common/autotest_common.sh@634 -- # type -t bperf_cmd 00:33:10.744 00:09:25 keyring_file -- common/autotest_common.sh@634 -- # case 
"$(type -t "$arg")" in 00:33:10.744 00:09:25 keyring_file -- common/autotest_common.sh@645 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:10.744 00:09:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:10.744 [2024-07-16 00:09:25.929423] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:10.744 [2024-07-16 00:09:25.929528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c94450 (107): Transport endpoint is not connected 00:33:10.744 [2024-07-16 00:09:25.930524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c94450 (9): Bad file descriptor 00:33:10.744 [2024-07-16 00:09:25.931532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:10.744 [2024-07-16 00:09:25.931540] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:10.744 [2024-07-16 00:09:25.931547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:11.004 request: 00:33:11.004 { 00:33:11.004 "name": "nvme0", 00:33:11.004 "trtype": "tcp", 00:33:11.004 "traddr": "127.0.0.1", 00:33:11.004 "adrfam": "ipv4", 00:33:11.004 "trsvcid": "4420", 00:33:11.004 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:11.004 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:11.004 "prchk_reftag": false, 00:33:11.004 "prchk_guard": false, 00:33:11.004 "hdgst": false, 00:33:11.004 "ddgst": false, 00:33:11.004 "psk": "key1", 00:33:11.005 "method": "bdev_nvme_attach_controller", 00:33:11.005 "req_id": 1 00:33:11.005 } 00:33:11.005 Got JSON-RPC error response 00:33:11.005 response: 00:33:11.005 { 00:33:11.005 "code": -5, 00:33:11.005 "message": "Input/output error" 00:33:11.005 } 00:33:11.005 00:09:25 keyring_file -- common/autotest_common.sh@645 -- # es=1 00:33:11.005 00:09:25 keyring_file -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:33:11.005 00:09:25 keyring_file -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:33:11.005 00:09:25 keyring_file -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:33:11.005 00:09:25 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:33:11.005 00:09:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:11.005 00:09:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:11.005 00:09:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:11.005 00:09:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:11.005 00:09:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:11.005 00:09:26 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:33:11.005 00:09:26 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:33:11.005 00:09:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:11.005 00:09:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:11.005 00:09:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:33:11.005 00:09:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:11.005 00:09:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:11.265 00:09:26 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:11.265 00:09:26 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:33:11.265 00:09:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:11.265 00:09:26 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:33:11.266 00:09:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:11.526 00:09:26 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:33:11.526 00:09:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:11.526 00:09:26 keyring_file -- keyring/file.sh@77 -- # jq length 00:33:11.787 00:09:26 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:33:11.787 00:09:26 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.aCTisyOdaz 00:33:11.787 00:09:26 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.aCTisyOdaz 00:33:11.787 00:09:26 keyring_file -- common/autotest_common.sh@642 -- # local es=0 00:33:11.787 00:09:26 keyring_file -- common/autotest_common.sh@644 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.aCTisyOdaz 00:33:11.787 00:09:26 keyring_file -- common/autotest_common.sh@630 -- # local arg=bperf_cmd 00:33:11.787 00:09:26 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:33:11.787 00:09:26 keyring_file -- common/autotest_common.sh@634 -- # type -t bperf_cmd 00:33:11.787 00:09:26 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:33:11.787 00:09:26 keyring_file -- common/autotest_common.sh@645 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.aCTisyOdaz 00:33:11.787 00:09:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.aCTisyOdaz 00:33:11.787 [2024-07-16 00:09:26.880257] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.aCTisyOdaz': 0100660 00:33:11.787 [2024-07-16 00:09:26.880276] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:11.787 request: 00:33:11.787 { 00:33:11.787 "name": "key0", 00:33:11.787 "path": "/tmp/tmp.aCTisyOdaz", 00:33:11.787 "method": "keyring_file_add_key", 00:33:11.787 "req_id": 1 00:33:11.787 } 00:33:11.787 Got JSON-RPC error response 00:33:11.787 response: 00:33:11.787 { 00:33:11.787 "code": -1, 00:33:11.787 "message": "Operation not permitted" 00:33:11.787 } 00:33:11.787 00:09:26 keyring_file -- common/autotest_common.sh@645 -- # es=1 00:33:11.787 00:09:26 keyring_file -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:33:11.787 00:09:26 keyring_file -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:33:11.787 00:09:26 keyring_file -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:33:11.787 00:09:26 keyring_file -- keyring/file.sh@84 -- # chmod 0600 
/tmp/tmp.aCTisyOdaz 00:33:11.787 00:09:26 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.aCTisyOdaz 00:33:11.787 00:09:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.aCTisyOdaz 00:33:12.047 00:09:27 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.aCTisyOdaz 00:33:12.047 00:09:27 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:33:12.047 00:09:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:12.047 00:09:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:12.047 00:09:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:12.047 00:09:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:12.047 00:09:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:12.047 00:09:27 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:33:12.047 00:09:27 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:12.047 00:09:27 keyring_file -- common/autotest_common.sh@642 -- # local es=0 00:33:12.047 00:09:27 keyring_file -- common/autotest_common.sh@644 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:12.047 00:09:27 keyring_file -- common/autotest_common.sh@630 -- # local arg=bperf_cmd 00:33:12.047 00:09:27 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:33:12.047 00:09:27 keyring_file -- common/autotest_common.sh@634 -- # type -t bperf_cmd 00:33:12.047 00:09:27 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:33:12.047 00:09:27 keyring_file -- common/autotest_common.sh@645 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:12.047 00:09:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:12.308 [2024-07-16 00:09:27.357473] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.aCTisyOdaz': No such file or directory 00:33:12.308 [2024-07-16 00:09:27.357490] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:12.308 [2024-07-16 00:09:27.357507] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:12.308 [2024-07-16 00:09:27.357512] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:12.308 [2024-07-16 00:09:27.357517] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:12.308 request: 00:33:12.308 { 00:33:12.308 "name": "nvme0", 00:33:12.308 "trtype": "tcp", 00:33:12.308 "traddr": "127.0.0.1", 00:33:12.308 "adrfam": "ipv4", 00:33:12.308 "trsvcid": "4420", 00:33:12.308 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:12.308 "hostnqn": "nqn.2016-06.io.spdk:host0", 
00:33:12.308 "prchk_reftag": false, 00:33:12.308 "prchk_guard": false, 00:33:12.308 "hdgst": false, 00:33:12.308 "ddgst": false, 00:33:12.308 "psk": "key0", 00:33:12.308 "method": "bdev_nvme_attach_controller", 00:33:12.308 "req_id": 1 00:33:12.308 } 00:33:12.308 Got JSON-RPC error response 00:33:12.309 response: 00:33:12.309 { 00:33:12.309 "code": -19, 00:33:12.309 "message": "No such device" 00:33:12.309 } 00:33:12.309 00:09:27 keyring_file -- common/autotest_common.sh@645 -- # es=1 00:33:12.309 00:09:27 keyring_file -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:33:12.309 00:09:27 keyring_file -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:33:12.309 00:09:27 keyring_file -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:33:12.309 00:09:27 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:33:12.309 00:09:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:12.570 00:09:27 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:12.570 00:09:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:12.570 00:09:27 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:12.570 00:09:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:12.570 00:09:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:12.570 00:09:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:12.570 00:09:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.qaS9SaC8Tq 00:33:12.570 00:09:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:12.570 00:09:27 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:12.570 00:09:27 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:12.570 00:09:27 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:12.570 00:09:27 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:12.570 00:09:27 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:12.570 00:09:27 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:12.570 00:09:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.qaS9SaC8Tq 00:33:12.570 00:09:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.qaS9SaC8Tq 00:33:12.570 00:09:27 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.qaS9SaC8Tq 00:33:12.570 00:09:27 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qaS9SaC8Tq 00:33:12.570 00:09:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qaS9SaC8Tq 00:33:12.570 00:09:27 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:12.570 00:09:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:12.832 nvme0n1 00:33:12.832 00:09:27 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:33:12.832 00:09:27 keyring_file -- keyring/common.sh@12 -- 
# get_key key0 00:33:12.832 00:09:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:12.832 00:09:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:12.832 00:09:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:12.832 00:09:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:13.094 00:09:28 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:33:13.094 00:09:28 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:33:13.094 00:09:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:13.094 00:09:28 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:33:13.094 00:09:28 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:33:13.094 00:09:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:13.094 00:09:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:13.094 00:09:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:13.355 00:09:28 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:33:13.355 00:09:28 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:33:13.355 00:09:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:13.355 00:09:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:13.355 00:09:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:13.355 00:09:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:13.355 00:09:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:13.615 00:09:28 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:33:13.615 00:09:28 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:13.615 00:09:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:13.615 00:09:28 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:33:13.615 00:09:28 keyring_file -- keyring/file.sh@104 -- # jq length 00:33:13.615 00:09:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:13.875 00:09:28 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:33:13.875 00:09:28 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qaS9SaC8Tq 00:33:13.875 00:09:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qaS9SaC8Tq 00:33:14.136 00:09:29 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.c3vlWNKXX7 00:33:14.136 00:09:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.c3vlWNKXX7 00:33:14.136 00:09:29 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:14.137 00:09:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:14.398 nvme0n1 00:33:14.398 00:09:29 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:33:14.398 00:09:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:14.660 00:09:29 keyring_file -- keyring/file.sh@112 -- # config='{ 00:33:14.660 "subsystems": [ 00:33:14.660 { 00:33:14.660 "subsystem": "keyring", 00:33:14.660 "config": [ 00:33:14.660 { 00:33:14.660 "method": "keyring_file_add_key", 00:33:14.660 "params": { 00:33:14.660 "name": "key0", 00:33:14.660 "path": "/tmp/tmp.qaS9SaC8Tq" 00:33:14.660 } 00:33:14.660 }, 00:33:14.660 { 00:33:14.660 "method": "keyring_file_add_key", 00:33:14.660 "params": { 00:33:14.660 "name": "key1", 00:33:14.660 "path": "/tmp/tmp.c3vlWNKXX7" 00:33:14.660 } 00:33:14.660 } 00:33:14.660 ] 00:33:14.660 }, 00:33:14.660 { 00:33:14.660 "subsystem": "iobuf", 00:33:14.660 "config": [ 00:33:14.660 { 00:33:14.660 "method": "iobuf_set_options", 00:33:14.660 "params": { 00:33:14.660 "small_pool_count": 8192, 00:33:14.660 "large_pool_count": 1024, 00:33:14.660 "small_bufsize": 8192, 00:33:14.660 "large_bufsize": 135168 00:33:14.660 } 00:33:14.660 } 00:33:14.660 ] 00:33:14.660 }, 00:33:14.660 { 00:33:14.660 "subsystem": "sock", 00:33:14.660 "config": [ 00:33:14.660 { 00:33:14.660 "method": "sock_set_default_impl", 00:33:14.660 "params": { 00:33:14.660 "impl_name": "posix" 00:33:14.660 } 00:33:14.660 }, 00:33:14.660 { 00:33:14.660 "method": "sock_impl_set_options", 00:33:14.660 "params": { 00:33:14.660 "impl_name": "ssl", 00:33:14.660 "recv_buf_size": 4096, 00:33:14.660 "send_buf_size": 4096, 00:33:14.660 "enable_recv_pipe": true, 00:33:14.660 "enable_quickack": false, 00:33:14.660 "enable_placement_id": 0, 00:33:14.660 "enable_zerocopy_send_server": true, 00:33:14.660 "enable_zerocopy_send_client": false, 00:33:14.660 "zerocopy_threshold": 0, 00:33:14.660 "tls_version": 0, 00:33:14.660 "enable_ktls": false 00:33:14.660 } 00:33:14.660 }, 00:33:14.660 { 00:33:14.660 "method": "sock_impl_set_options", 00:33:14.660 "params": { 00:33:14.660 "impl_name": "posix", 00:33:14.660 "recv_buf_size": 2097152, 00:33:14.660 "send_buf_size": 2097152, 00:33:14.660 "enable_recv_pipe": true, 00:33:14.660 "enable_quickack": false, 00:33:14.660 "enable_placement_id": 0, 00:33:14.660 "enable_zerocopy_send_server": true, 00:33:14.660 "enable_zerocopy_send_client": false, 00:33:14.660 "zerocopy_threshold": 0, 00:33:14.660 "tls_version": 0, 00:33:14.660 "enable_ktls": false 00:33:14.660 } 00:33:14.660 } 00:33:14.660 ] 00:33:14.660 }, 00:33:14.660 { 00:33:14.660 "subsystem": "vmd", 00:33:14.660 "config": [] 00:33:14.660 }, 00:33:14.660 { 00:33:14.660 "subsystem": "accel", 00:33:14.660 "config": [ 00:33:14.660 { 00:33:14.660 "method": "accel_set_options", 00:33:14.660 "params": { 00:33:14.660 "small_cache_size": 128, 00:33:14.660 "large_cache_size": 16, 00:33:14.660 "task_count": 2048, 00:33:14.660 "sequence_count": 2048, 00:33:14.660 "buf_count": 2048 00:33:14.660 } 00:33:14.660 } 00:33:14.660 ] 00:33:14.660 }, 00:33:14.660 { 00:33:14.660 "subsystem": "bdev", 00:33:14.660 "config": [ 00:33:14.660 { 00:33:14.660 "method": 
"bdev_set_options", 00:33:14.660 "params": { 00:33:14.660 "bdev_io_pool_size": 65535, 00:33:14.660 "bdev_io_cache_size": 256, 00:33:14.660 "bdev_auto_examine": true, 00:33:14.660 "iobuf_small_cache_size": 128, 00:33:14.660 "iobuf_large_cache_size": 16 00:33:14.660 } 00:33:14.660 }, 00:33:14.660 { 00:33:14.660 "method": "bdev_raid_set_options", 00:33:14.660 "params": { 00:33:14.660 "process_window_size_kb": 1024 00:33:14.660 } 00:33:14.660 }, 00:33:14.660 { 00:33:14.660 "method": "bdev_iscsi_set_options", 00:33:14.660 "params": { 00:33:14.660 "timeout_sec": 30 00:33:14.660 } 00:33:14.660 }, 00:33:14.660 { 00:33:14.660 "method": "bdev_nvme_set_options", 00:33:14.660 "params": { 00:33:14.660 "action_on_timeout": "none", 00:33:14.660 "timeout_us": 0, 00:33:14.660 "timeout_admin_us": 0, 00:33:14.660 "keep_alive_timeout_ms": 10000, 00:33:14.660 "arbitration_burst": 0, 00:33:14.660 "low_priority_weight": 0, 00:33:14.660 "medium_priority_weight": 0, 00:33:14.660 "high_priority_weight": 0, 00:33:14.660 "nvme_adminq_poll_period_us": 10000, 00:33:14.660 "nvme_ioq_poll_period_us": 0, 00:33:14.660 "io_queue_requests": 512, 00:33:14.660 "delay_cmd_submit": true, 00:33:14.660 "transport_retry_count": 4, 00:33:14.660 "bdev_retry_count": 3, 00:33:14.660 "transport_ack_timeout": 0, 00:33:14.660 "ctrlr_loss_timeout_sec": 0, 00:33:14.660 "reconnect_delay_sec": 0, 00:33:14.660 "fast_io_fail_timeout_sec": 0, 00:33:14.660 "disable_auto_failback": false, 00:33:14.660 "generate_uuids": false, 00:33:14.660 "transport_tos": 0, 00:33:14.660 "nvme_error_stat": false, 00:33:14.660 "rdma_srq_size": 0, 00:33:14.660 "io_path_stat": false, 00:33:14.660 "allow_accel_sequence": false, 00:33:14.660 "rdma_max_cq_size": 0, 00:33:14.660 "rdma_cm_event_timeout_ms": 0, 00:33:14.660 "dhchap_digests": [ 00:33:14.661 "sha256", 00:33:14.661 "sha384", 00:33:14.661 "sha512" 00:33:14.661 ], 00:33:14.661 "dhchap_dhgroups": [ 00:33:14.661 "null", 00:33:14.661 "ffdhe2048", 00:33:14.661 "ffdhe3072", 00:33:14.661 "ffdhe4096", 00:33:14.661 "ffdhe6144", 00:33:14.661 "ffdhe8192" 00:33:14.661 ] 00:33:14.661 } 00:33:14.661 }, 00:33:14.661 { 00:33:14.661 "method": "bdev_nvme_attach_controller", 00:33:14.661 "params": { 00:33:14.661 "name": "nvme0", 00:33:14.661 "trtype": "TCP", 00:33:14.661 "adrfam": "IPv4", 00:33:14.661 "traddr": "127.0.0.1", 00:33:14.661 "trsvcid": "4420", 00:33:14.661 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:14.661 "prchk_reftag": false, 00:33:14.661 "prchk_guard": false, 00:33:14.661 "ctrlr_loss_timeout_sec": 0, 00:33:14.661 "reconnect_delay_sec": 0, 00:33:14.661 "fast_io_fail_timeout_sec": 0, 00:33:14.661 "psk": "key0", 00:33:14.661 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:14.661 "hdgst": false, 00:33:14.661 "ddgst": false 00:33:14.661 } 00:33:14.661 }, 00:33:14.661 { 00:33:14.661 "method": "bdev_nvme_set_hotplug", 00:33:14.661 "params": { 00:33:14.661 "period_us": 100000, 00:33:14.661 "enable": false 00:33:14.661 } 00:33:14.661 }, 00:33:14.661 { 00:33:14.661 "method": "bdev_wait_for_examine" 00:33:14.661 } 00:33:14.661 ] 00:33:14.661 }, 00:33:14.661 { 00:33:14.661 "subsystem": "nbd", 00:33:14.661 "config": [] 00:33:14.661 } 00:33:14.661 ] 00:33:14.661 }' 00:33:14.661 00:09:29 keyring_file -- keyring/file.sh@114 -- # killprocess 713091 00:33:14.661 00:09:29 keyring_file -- common/autotest_common.sh@942 -- # '[' -z 713091 ']' 00:33:14.661 00:09:29 keyring_file -- common/autotest_common.sh@946 -- # kill -0 713091 00:33:14.661 00:09:29 keyring_file -- common/autotest_common.sh@947 -- # uname 00:33:14.661 00:09:29 
keyring_file -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:33:14.661 00:09:29 keyring_file -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 713091 00:33:14.661 00:09:29 keyring_file -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:33:14.661 00:09:29 keyring_file -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:33:14.661 00:09:29 keyring_file -- common/autotest_common.sh@960 -- # echo 'killing process with pid 713091' 00:33:14.661 killing process with pid 713091 00:33:14.661 00:09:29 keyring_file -- common/autotest_common.sh@961 -- # kill 713091 00:33:14.661 Received shutdown signal, test time was about 1.000000 seconds 00:33:14.661 00:33:14.661 Latency(us) 00:33:14.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:14.661 =================================================================================================================== 00:33:14.661 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:14.661 00:09:29 keyring_file -- common/autotest_common.sh@966 -- # wait 713091 00:33:14.661 00:09:29 keyring_file -- keyring/file.sh@117 -- # bperfpid=714713 00:33:14.661 00:09:29 keyring_file -- keyring/file.sh@119 -- # waitforlisten 714713 /var/tmp/bperf.sock 00:33:14.661 00:09:29 keyring_file -- common/autotest_common.sh@823 -- # '[' -z 714713 ']' 00:33:14.661 00:09:29 keyring_file -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:14.661 00:09:29 keyring_file -- common/autotest_common.sh@828 -- # local max_retries=100 00:33:14.661 00:09:29 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:14.661 00:09:29 keyring_file -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:14.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
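(Annotation, not test output: the keyring_file flow traced above reduces to the RPC sequence below. This is a hand-written sketch against the same bperf socket, using the temporary key files created earlier in the run; it is not part of the recorded log.)

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
$rpc -s $sock keyring_file_add_key key0 /tmp/tmp.qaS9SaC8Tq       # register the PSK file under the name key0
$rpc -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
$rpc -s $sock keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'   # 2: registered and in use by nvme0
$rpc -s $sock keyring_file_remove_key key0                        # key is marked removed, refcnt drops to 1
$rpc -s $sock bdev_nvme_detach_controller nvme0                   # last reference gone; keyring_get_keys is then empty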
00:33:14.661 00:09:29 keyring_file -- common/autotest_common.sh@832 -- # xtrace_disable 00:33:14.661 00:09:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:14.661 00:09:29 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:33:14.661 "subsystems": [ 00:33:14.661 { 00:33:14.661 "subsystem": "keyring", 00:33:14.661 "config": [ 00:33:14.661 { 00:33:14.661 "method": "keyring_file_add_key", 00:33:14.661 "params": { 00:33:14.661 "name": "key0", 00:33:14.661 "path": "/tmp/tmp.qaS9SaC8Tq" 00:33:14.661 } 00:33:14.661 }, 00:33:14.661 { 00:33:14.661 "method": "keyring_file_add_key", 00:33:14.661 "params": { 00:33:14.661 "name": "key1", 00:33:14.661 "path": "/tmp/tmp.c3vlWNKXX7" 00:33:14.661 } 00:33:14.661 } 00:33:14.661 ] 00:33:14.661 }, 00:33:14.661 { 00:33:14.661 "subsystem": "iobuf", 00:33:14.661 "config": [ 00:33:14.661 { 00:33:14.661 "method": "iobuf_set_options", 00:33:14.661 "params": { 00:33:14.661 "small_pool_count": 8192, 00:33:14.661 "large_pool_count": 1024, 00:33:14.661 "small_bufsize": 8192, 00:33:14.661 "large_bufsize": 135168 00:33:14.661 } 00:33:14.661 } 00:33:14.661 ] 00:33:14.661 }, 00:33:14.661 { 00:33:14.661 "subsystem": "sock", 00:33:14.661 "config": [ 00:33:14.661 { 00:33:14.661 "method": "sock_set_default_impl", 00:33:14.661 "params": { 00:33:14.661 "impl_name": "posix" 00:33:14.661 } 00:33:14.661 }, 00:33:14.661 { 00:33:14.661 "method": "sock_impl_set_options", 00:33:14.661 "params": { 00:33:14.661 "impl_name": "ssl", 00:33:14.661 "recv_buf_size": 4096, 00:33:14.661 "send_buf_size": 4096, 00:33:14.661 "enable_recv_pipe": true, 00:33:14.661 "enable_quickack": false, 00:33:14.661 "enable_placement_id": 0, 00:33:14.661 "enable_zerocopy_send_server": true, 00:33:14.661 "enable_zerocopy_send_client": false, 00:33:14.661 "zerocopy_threshold": 0, 00:33:14.661 "tls_version": 0, 00:33:14.661 "enable_ktls": false 00:33:14.661 } 00:33:14.661 }, 00:33:14.661 { 00:33:14.661 "method": "sock_impl_set_options", 00:33:14.661 "params": { 00:33:14.661 "impl_name": "posix", 00:33:14.661 "recv_buf_size": 2097152, 00:33:14.661 "send_buf_size": 2097152, 00:33:14.661 "enable_recv_pipe": true, 00:33:14.661 "enable_quickack": false, 00:33:14.661 "enable_placement_id": 0, 00:33:14.661 "enable_zerocopy_send_server": true, 00:33:14.661 "enable_zerocopy_send_client": false, 00:33:14.661 "zerocopy_threshold": 0, 00:33:14.661 "tls_version": 0, 00:33:14.661 "enable_ktls": false 00:33:14.661 } 00:33:14.661 } 00:33:14.661 ] 00:33:14.661 }, 00:33:14.661 { 00:33:14.661 "subsystem": "vmd", 00:33:14.661 "config": [] 00:33:14.661 }, 00:33:14.661 { 00:33:14.661 "subsystem": "accel", 00:33:14.661 "config": [ 00:33:14.661 { 00:33:14.661 "method": "accel_set_options", 00:33:14.661 "params": { 00:33:14.661 "small_cache_size": 128, 00:33:14.661 "large_cache_size": 16, 00:33:14.661 "task_count": 2048, 00:33:14.661 "sequence_count": 2048, 00:33:14.661 "buf_count": 2048 00:33:14.661 } 00:33:14.661 } 00:33:14.661 ] 00:33:14.661 }, 00:33:14.661 { 00:33:14.661 "subsystem": "bdev", 00:33:14.661 "config": [ 00:33:14.661 { 00:33:14.661 "method": "bdev_set_options", 00:33:14.661 "params": { 00:33:14.661 "bdev_io_pool_size": 65535, 00:33:14.661 "bdev_io_cache_size": 256, 00:33:14.661 "bdev_auto_examine": true, 00:33:14.661 "iobuf_small_cache_size": 128, 00:33:14.661 "iobuf_large_cache_size": 16 00:33:14.661 } 00:33:14.661 }, 00:33:14.661 { 00:33:14.661 "method": "bdev_raid_set_options", 00:33:14.661 "params": { 00:33:14.661 "process_window_size_kb": 1024 00:33:14.661 } 00:33:14.661 }, 00:33:14.661 { 00:33:14.661 
"method": "bdev_iscsi_set_options", 00:33:14.662 "params": { 00:33:14.662 "timeout_sec": 30 00:33:14.662 } 00:33:14.662 }, 00:33:14.662 { 00:33:14.662 "method": "bdev_nvme_set_options", 00:33:14.662 "params": { 00:33:14.662 "action_on_timeout": "none", 00:33:14.662 "timeout_us": 0, 00:33:14.662 "timeout_admin_us": 0, 00:33:14.662 "keep_alive_timeout_ms": 10000, 00:33:14.662 "arbitration_burst": 0, 00:33:14.662 "low_priority_weight": 0, 00:33:14.662 "medium_priority_weight": 0, 00:33:14.662 "high_priority_weight": 0, 00:33:14.662 "nvme_adminq_poll_period_us": 10000, 00:33:14.662 "nvme_ioq_poll_period_us": 0, 00:33:14.662 "io_queue_requests": 512, 00:33:14.662 "delay_cmd_submit": true, 00:33:14.662 "transport_retry_count": 4, 00:33:14.662 "bdev_retry_count": 3, 00:33:14.662 "transport_ack_timeout": 0, 00:33:14.662 "ctrlr_loss_timeout_sec": 0, 00:33:14.662 "reconnect_delay_sec": 0, 00:33:14.662 "fast_io_fail_timeout_sec": 0, 00:33:14.662 "disable_auto_failback": false, 00:33:14.662 "generate_uuids": false, 00:33:14.662 "transport_tos": 0, 00:33:14.662 "nvme_error_stat": false, 00:33:14.662 "rdma_srq_size": 0, 00:33:14.662 "io_path_stat": false, 00:33:14.662 "allow_accel_sequence": false, 00:33:14.662 "rdma_max_cq_size": 0, 00:33:14.662 "rdma_cm_event_timeout_ms": 0, 00:33:14.662 "dhchap_digests": [ 00:33:14.662 "sha256", 00:33:14.662 "sha384", 00:33:14.662 "sha512" 00:33:14.662 ], 00:33:14.662 "dhchap_dhgroups": [ 00:33:14.662 "null", 00:33:14.662 "ffdhe2048", 00:33:14.662 "ffdhe3072", 00:33:14.662 "ffdhe4096", 00:33:14.662 "ffdhe6144", 00:33:14.662 "ffdhe8192" 00:33:14.662 ] 00:33:14.662 } 00:33:14.662 }, 00:33:14.662 { 00:33:14.662 "method": "bdev_nvme_attach_controller", 00:33:14.662 "params": { 00:33:14.662 "name": "nvme0", 00:33:14.662 "trtype": "TCP", 00:33:14.662 "adrfam": "IPv4", 00:33:14.662 "traddr": "127.0.0.1", 00:33:14.662 "trsvcid": "4420", 00:33:14.662 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:14.662 "prchk_reftag": false, 00:33:14.662 "prchk_guard": false, 00:33:14.662 "ctrlr_loss_timeout_sec": 0, 00:33:14.662 "reconnect_delay_sec": 0, 00:33:14.662 "fast_io_fail_timeout_sec": 0, 00:33:14.662 "psk": "key0", 00:33:14.662 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:14.662 "hdgst": false, 00:33:14.662 "ddgst": false 00:33:14.662 } 00:33:14.662 }, 00:33:14.662 { 00:33:14.662 "method": "bdev_nvme_set_hotplug", 00:33:14.662 "params": { 00:33:14.662 "period_us": 100000, 00:33:14.662 "enable": false 00:33:14.662 } 00:33:14.662 }, 00:33:14.662 { 00:33:14.662 "method": "bdev_wait_for_examine" 00:33:14.662 } 00:33:14.662 ] 00:33:14.662 }, 00:33:14.662 { 00:33:14.662 "subsystem": "nbd", 00:33:14.662 "config": [] 00:33:14.662 } 00:33:14.662 ] 00:33:14.662 }' 00:33:14.924 [2024-07-16 00:09:29.865520] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
00:33:14.924 [2024-07-16 00:09:29.865578] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid714713 ] 00:33:14.924 [2024-07-16 00:09:29.944961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.924 [2024-07-16 00:09:29.999253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:15.184 [2024-07-16 00:09:30.141419] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:15.444 00:09:30 keyring_file -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:33:15.444 00:09:30 keyring_file -- common/autotest_common.sh@856 -- # return 0 00:33:15.444 00:09:30 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:33:15.444 00:09:30 keyring_file -- keyring/file.sh@120 -- # jq length 00:33:15.444 00:09:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:15.705 00:09:30 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:15.705 00:09:30 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:33:15.705 00:09:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:15.705 00:09:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:15.705 00:09:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:15.705 00:09:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:15.705 00:09:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:15.965 00:09:30 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:15.965 00:09:30 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:33:15.966 00:09:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:15.966 00:09:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:15.966 00:09:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:15.966 00:09:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:15.966 00:09:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:15.966 00:09:31 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:33:15.966 00:09:31 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:33:15.966 00:09:31 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:33:15.966 00:09:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:16.226 00:09:31 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:33:16.226 00:09:31 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:16.226 00:09:31 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.qaS9SaC8Tq /tmp/tmp.c3vlWNKXX7 00:33:16.226 00:09:31 keyring_file -- keyring/file.sh@20 -- # killprocess 714713 00:33:16.226 00:09:31 keyring_file -- common/autotest_common.sh@942 -- # '[' -z 714713 ']' 00:33:16.226 00:09:31 keyring_file -- common/autotest_common.sh@946 -- # kill -0 714713 00:33:16.226 00:09:31 keyring_file -- common/autotest_common.sh@947 -- # uname 00:33:16.226 00:09:31 keyring_file 
-- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:33:16.226 00:09:31 keyring_file -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 714713 00:33:16.226 00:09:31 keyring_file -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:33:16.226 00:09:31 keyring_file -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:33:16.226 00:09:31 keyring_file -- common/autotest_common.sh@960 -- # echo 'killing process with pid 714713' 00:33:16.226 killing process with pid 714713 00:33:16.226 00:09:31 keyring_file -- common/autotest_common.sh@961 -- # kill 714713 00:33:16.226 Received shutdown signal, test time was about 1.000000 seconds 00:33:16.226 00:33:16.226 Latency(us) 00:33:16.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:16.226 =================================================================================================================== 00:33:16.226 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:16.227 00:09:31 keyring_file -- common/autotest_common.sh@966 -- # wait 714713 00:33:16.486 00:09:31 keyring_file -- keyring/file.sh@21 -- # killprocess 712934 00:33:16.487 00:09:31 keyring_file -- common/autotest_common.sh@942 -- # '[' -z 712934 ']' 00:33:16.487 00:09:31 keyring_file -- common/autotest_common.sh@946 -- # kill -0 712934 00:33:16.487 00:09:31 keyring_file -- common/autotest_common.sh@947 -- # uname 00:33:16.487 00:09:31 keyring_file -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:33:16.487 00:09:31 keyring_file -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 712934 00:33:16.487 00:09:31 keyring_file -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:33:16.487 00:09:31 keyring_file -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:33:16.487 00:09:31 keyring_file -- common/autotest_common.sh@960 -- # echo 'killing process with pid 712934' 00:33:16.487 killing process with pid 712934 00:33:16.487 00:09:31 keyring_file -- common/autotest_common.sh@961 -- # kill 712934 00:33:16.487 [2024-07-16 00:09:31.501827] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:16.487 00:09:31 keyring_file -- common/autotest_common.sh@966 -- # wait 712934 00:33:16.747 00:33:16.747 real 0m11.016s 00:33:16.747 user 0m25.713s 00:33:16.747 sys 0m2.747s 00:33:16.747 00:09:31 keyring_file -- common/autotest_common.sh@1118 -- # xtrace_disable 00:33:16.747 00:09:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:16.747 ************************************ 00:33:16.747 END TEST keyring_file 00:33:16.747 ************************************ 00:33:16.747 00:09:31 -- common/autotest_common.sh@1136 -- # return 0 00:33:16.747 00:09:31 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:33:16.747 00:09:31 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:16.747 00:09:31 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:33:16.747 00:09:31 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:33:16.747 00:09:31 -- common/autotest_common.sh@10 -- # set +x 00:33:16.747 ************************************ 00:33:16.747 START TEST keyring_linux 00:33:16.747 ************************************ 00:33:16.747 00:09:31 keyring_linux -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:16.747 * Looking for test storage... 
00:33:16.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:16.747 00:09:31 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:16.747 00:09:31 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:16.747 00:09:31 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:16.747 00:09:31 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:16.747 00:09:31 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:16.747 00:09:31 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.747 00:09:31 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.747 00:09:31 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.747 00:09:31 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:16.747 00:09:31 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:16.747 00:09:31 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:16.747 00:09:31 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:16.747 00:09:31 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:16.748 00:09:31 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:16.748 00:09:31 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:16.748 00:09:31 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:16.748 00:09:31 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:16.748 00:09:31 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:16.748 00:09:31 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:16.748 00:09:31 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:33:16.748 00:09:31 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:16.748 00:09:31 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:16.748 00:09:31 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:16.748 00:09:31 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:16.748 00:09:31 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:16.748 00:09:31 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:16.748 00:09:31 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:16.748 00:09:31 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:16.748 00:09:31 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:16.748 00:09:31 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:17.007 00:09:31 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:17.007 00:09:31 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:17.007 /tmp/:spdk-test:key0 00:33:17.007 00:09:31 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:17.007 00:09:31 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:17.007 00:09:31 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:17.007 00:09:31 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:17.007 00:09:31 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:17.007 00:09:31 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:17.007 00:09:31 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:17.007 00:09:31 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:17.007 00:09:31 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:17.007 00:09:31 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:17.007 00:09:31 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:17.007 00:09:31 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:17.007 00:09:31 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:17.007 00:09:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:17.007 00:09:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:17.007 /tmp/:spdk-test:key1 00:33:17.007 00:09:32 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=715319 00:33:17.007 00:09:32 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 715319 00:33:17.007 00:09:32 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:17.007 00:09:32 keyring_linux -- common/autotest_common.sh@823 -- # '[' -z 715319 ']' 00:33:17.007 00:09:32 keyring_linux -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:17.007 00:09:32 keyring_linux -- common/autotest_common.sh@828 -- # local max_retries=100 00:33:17.007 00:09:32 keyring_linux -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:17.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:17.007 00:09:32 keyring_linux -- common/autotest_common.sh@832 -- # xtrace_disable 00:33:17.007 00:09:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:17.007 [2024-07-16 00:09:32.079550] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
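(Annotation, not test output: the format_key/python step above builds the TLS PSK interchange string that shows up in the keyctl calls below, of the form NVMeTLSkey-1:00:<base64 payload>:. A rough hand-written equivalent; that the key text is used as ASCII bytes follows from the log, while the 4-byte little-endian CRC32 trailer is an assumption about what the inline python computes.)

# Sketch: wrap the configured key in the PSK interchange format used above.
key=00112233445566778899aabbccddeeff
python3 - "$key" << 'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                   # the key string itself is used as the key bytes
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed trailer: CRC32 of the key bytes, little-endian
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
EOF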
00:33:17.007 [2024-07-16 00:09:32.079609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid715319 ] 00:33:17.007 [2024-07-16 00:09:32.148164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.268 [2024-07-16 00:09:32.213322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:17.838 00:09:32 keyring_linux -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:33:17.838 00:09:32 keyring_linux -- common/autotest_common.sh@856 -- # return 0 00:33:17.838 00:09:32 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:17.838 00:09:32 keyring_linux -- common/autotest_common.sh@553 -- # xtrace_disable 00:33:17.838 00:09:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:17.838 [2024-07-16 00:09:32.890517] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:17.838 null0 00:33:17.839 [2024-07-16 00:09:32.922563] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:17.839 [2024-07-16 00:09:32.922944] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:17.839 00:09:32 keyring_linux -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:33:17.839 00:09:32 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:17.839 1015994011 00:33:17.839 00:09:32 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:17.839 1058394204 00:33:17.839 00:09:32 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=715336 00:33:17.839 00:09:32 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 715336 /var/tmp/bperf.sock 00:33:17.839 00:09:32 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:17.839 00:09:32 keyring_linux -- common/autotest_common.sh@823 -- # '[' -z 715336 ']' 00:33:17.839 00:09:32 keyring_linux -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:17.839 00:09:32 keyring_linux -- common/autotest_common.sh@828 -- # local max_retries=100 00:33:17.839 00:09:32 keyring_linux -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:17.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:17.839 00:09:32 keyring_linux -- common/autotest_common.sh@832 -- # xtrace_disable 00:33:17.839 00:09:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:17.839 [2024-07-16 00:09:32.999808] Starting SPDK v24.09-pre git sha1 a83ad116a / DPDK 24.03.0 initialization... 
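(Annotation, not test output: the serials 1015994011 and 1058394204 printed by keyctl above are the kernel keyring handles that keyring_linux resolves and unlinks later in the run. A hand-written sketch of the same keyctl round trip on the session keyring:)

psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'   # interchange string from the log above
keyctl add user :spdk-test:key0 "$psk" @s       # prints the serial of the new key
sn=$(keyctl search @s user :spdk-test:key0)     # resolve the name back to that serial
keyctl print "$sn"                              # dump the stored payload
keyctl unlink "$sn"                             # remove the key ("1 links removed")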
00:33:17.839 [2024-07-16 00:09:32.999855] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid715336 ] 00:33:18.099 [2024-07-16 00:09:33.080745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.099 [2024-07-16 00:09:33.134943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:18.670 00:09:33 keyring_linux -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:33:18.670 00:09:33 keyring_linux -- common/autotest_common.sh@856 -- # return 0 00:33:18.670 00:09:33 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:18.671 00:09:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:18.932 00:09:33 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:18.932 00:09:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:18.932 00:09:34 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:18.932 00:09:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:19.192 [2024-07-16 00:09:34.245802] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:19.192 nvme0n1 00:33:19.192 00:09:34 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:33:19.192 00:09:34 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:19.192 00:09:34 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:19.192 00:09:34 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:19.192 00:09:34 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:19.192 00:09:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:19.452 00:09:34 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:19.452 00:09:34 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:19.452 00:09:34 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:19.452 00:09:34 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:19.452 00:09:34 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:19.452 00:09:34 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:19.452 00:09:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:19.713 00:09:34 keyring_linux -- keyring/linux.sh@25 -- # sn=1015994011 00:33:19.713 00:09:34 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:19.713 00:09:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:19.713 00:09:34 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 1015994011 == \1\0\1\5\9\9\4\0\1\1 ]] 00:33:19.713 00:09:34 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1015994011 00:33:19.713 00:09:34 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:19.713 00:09:34 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:19.713 Running I/O for 1 seconds... 00:33:20.653 00:33:20.653 Latency(us) 00:33:20.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.653 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:20.653 nvme0n1 : 1.01 9902.18 38.68 0.00 0.00 12841.38 8028.16 18459.31 00:33:20.653 =================================================================================================================== 00:33:20.653 Total : 9902.18 38.68 0.00 0.00 12841.38 8028.16 18459.31 00:33:20.653 0 00:33:20.653 00:09:35 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:20.653 00:09:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:20.914 00:09:35 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:20.914 00:09:35 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:20.914 00:09:35 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:20.914 00:09:35 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:20.914 00:09:35 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:20.914 00:09:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:21.175 00:09:36 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:21.175 00:09:36 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:21.175 00:09:36 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:21.175 00:09:36 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:21.175 00:09:36 keyring_linux -- common/autotest_common.sh@642 -- # local es=0 00:33:21.175 00:09:36 keyring_linux -- common/autotest_common.sh@644 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:21.175 00:09:36 keyring_linux -- common/autotest_common.sh@630 -- # local arg=bperf_cmd 00:33:21.175 00:09:36 keyring_linux -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:33:21.175 00:09:36 keyring_linux -- common/autotest_common.sh@634 -- # type -t bperf_cmd 00:33:21.175 00:09:36 keyring_linux -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:33:21.175 00:09:36 keyring_linux -- common/autotest_common.sh@645 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:21.175 00:09:36 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:21.175 [2024-07-16 00:09:36.264964] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:21.175 [2024-07-16 00:09:36.265500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1806000 (107): Transport endpoint is not connected 00:33:21.175 [2024-07-16 00:09:36.266496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1806000 (9): Bad file descriptor 00:33:21.175 [2024-07-16 00:09:36.267498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:21.175 [2024-07-16 00:09:36.267509] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:21.175 [2024-07-16 00:09:36.267515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:21.175 request: 00:33:21.175 { 00:33:21.175 "name": "nvme0", 00:33:21.175 "trtype": "tcp", 00:33:21.175 "traddr": "127.0.0.1", 00:33:21.175 "adrfam": "ipv4", 00:33:21.175 "trsvcid": "4420", 00:33:21.175 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:21.175 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:21.175 "prchk_reftag": false, 00:33:21.175 "prchk_guard": false, 00:33:21.175 "hdgst": false, 00:33:21.175 "ddgst": false, 00:33:21.175 "psk": ":spdk-test:key1", 00:33:21.175 "method": "bdev_nvme_attach_controller", 00:33:21.175 "req_id": 1 00:33:21.175 } 00:33:21.175 Got JSON-RPC error response 00:33:21.175 response: 00:33:21.175 { 00:33:21.175 "code": -5, 00:33:21.175 "message": "Input/output error" 00:33:21.175 } 00:33:21.175 00:09:36 keyring_linux -- common/autotest_common.sh@645 -- # es=1 00:33:21.175 00:09:36 keyring_linux -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:33:21.175 00:09:36 keyring_linux -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:33:21.175 00:09:36 keyring_linux -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:33:21.175 00:09:36 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:21.175 00:09:36 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:21.175 00:09:36 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:21.175 00:09:36 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:21.175 00:09:36 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:21.175 00:09:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:21.175 00:09:36 keyring_linux -- keyring/linux.sh@33 -- # sn=1015994011 00:33:21.175 00:09:36 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1015994011 00:33:21.175 1 links removed 00:33:21.175 00:09:36 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:21.175 00:09:36 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:21.175 00:09:36 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:21.175 00:09:36 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:21.175 00:09:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:21.175 00:09:36 keyring_linux -- keyring/linux.sh@33 -- # sn=1058394204 00:33:21.175 00:09:36 keyring_linux -- 
keyring/linux.sh@34 -- # keyctl unlink 1058394204 00:33:21.175 1 links removed 00:33:21.175 00:09:36 keyring_linux -- keyring/linux.sh@41 -- # killprocess 715336 00:33:21.175 00:09:36 keyring_linux -- common/autotest_common.sh@942 -- # '[' -z 715336 ']' 00:33:21.175 00:09:36 keyring_linux -- common/autotest_common.sh@946 -- # kill -0 715336 00:33:21.175 00:09:36 keyring_linux -- common/autotest_common.sh@947 -- # uname 00:33:21.175 00:09:36 keyring_linux -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:33:21.175 00:09:36 keyring_linux -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 715336 00:33:21.175 00:09:36 keyring_linux -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:33:21.175 00:09:36 keyring_linux -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:33:21.175 00:09:36 keyring_linux -- common/autotest_common.sh@960 -- # echo 'killing process with pid 715336' 00:33:21.175 killing process with pid 715336 00:33:21.175 00:09:36 keyring_linux -- common/autotest_common.sh@961 -- # kill 715336 00:33:21.175 Received shutdown signal, test time was about 1.000000 seconds 00:33:21.175 00:33:21.175 Latency(us) 00:33:21.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:21.176 =================================================================================================================== 00:33:21.176 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:21.176 00:09:36 keyring_linux -- common/autotest_common.sh@966 -- # wait 715336 00:33:21.437 00:09:36 keyring_linux -- keyring/linux.sh@42 -- # killprocess 715319 00:33:21.437 00:09:36 keyring_linux -- common/autotest_common.sh@942 -- # '[' -z 715319 ']' 00:33:21.437 00:09:36 keyring_linux -- common/autotest_common.sh@946 -- # kill -0 715319 00:33:21.437 00:09:36 keyring_linux -- common/autotest_common.sh@947 -- # uname 00:33:21.437 00:09:36 keyring_linux -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:33:21.437 00:09:36 keyring_linux -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 715319 00:33:21.437 00:09:36 keyring_linux -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:33:21.437 00:09:36 keyring_linux -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:33:21.437 00:09:36 keyring_linux -- common/autotest_common.sh@960 -- # echo 'killing process with pid 715319' 00:33:21.437 killing process with pid 715319 00:33:21.437 00:09:36 keyring_linux -- common/autotest_common.sh@961 -- # kill 715319 00:33:21.437 00:09:36 keyring_linux -- common/autotest_common.sh@966 -- # wait 715319 00:33:21.698 00:33:21.698 real 0m4.943s 00:33:21.698 user 0m8.536s 00:33:21.698 sys 0m1.479s 00:33:21.698 00:09:36 keyring_linux -- common/autotest_common.sh@1118 -- # xtrace_disable 00:33:21.698 00:09:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:21.698 ************************************ 00:33:21.698 END TEST keyring_linux 00:33:21.698 ************************************ 00:33:21.698 00:09:36 -- common/autotest_common.sh@1136 -- # return 0 00:33:21.698 00:09:36 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:33:21.698 00:09:36 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:33:21.698 00:09:36 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:33:21.698 00:09:36 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:33:21.698 00:09:36 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:33:21.698 00:09:36 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:33:21.698 00:09:36 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 
00:33:21.698 00:09:36 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:21.699 00:09:36 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:33:21.699 00:09:36 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:33:21.699 00:09:36 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:33:21.699 00:09:36 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:33:21.699 00:09:36 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:21.699 00:09:36 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:21.699 00:09:36 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:21.699 00:09:36 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:33:21.699 00:09:36 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:33:21.699 00:09:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:21.699 00:09:36 -- common/autotest_common.sh@10 -- # set +x 00:33:21.699 00:09:36 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:33:21.699 00:09:36 -- common/autotest_common.sh@1386 -- # local autotest_es=0 00:33:21.699 00:09:36 -- common/autotest_common.sh@1387 -- # xtrace_disable 00:33:21.699 00:09:36 -- common/autotest_common.sh@10 -- # set +x 00:33:34.002 INFO: APP EXITING 00:33:34.002 INFO: killing all VMs 00:33:34.002 INFO: killing vhost app 00:33:34.002 INFO: EXIT DONE 00:33:37.304 Waiting for block devices as requested 00:33:37.304 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:37.304 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:37.304 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:37.304 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:37.304 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:37.304 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:37.304 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:37.564 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:37.564 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:37.824 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:37.824 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:37.824 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:37.824 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:38.084 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:38.084 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:38.084 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:38.084 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:42.290 Cleaning 00:33:42.290 Removing: /var/run/dpdk/spdk0/config 00:33:42.290 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:42.290 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:42.290 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:42.290 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:42.290 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:42.290 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:42.290 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:42.290 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:42.290 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:42.290 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:42.290 Removing: /var/run/dpdk/spdk1/config 00:33:42.290 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:42.291 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:42.291 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:42.291 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:42.291 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:42.291 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:42.291 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:42.291 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:42.291 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:42.291 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:42.291 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:42.291 Removing: /var/run/dpdk/spdk2/config 00:33:42.291 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:42.291 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:42.291 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:42.291 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:42.291 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:42.291 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:42.291 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:42.291 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:42.291 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:42.291 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:42.291 Removing: /var/run/dpdk/spdk3/config 00:33:42.291 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:42.291 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:42.291 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:42.291 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:42.291 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:42.291 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:42.291 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:42.291 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:42.291 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:42.291 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:42.291 Removing: /var/run/dpdk/spdk4/config 00:33:42.291 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:42.291 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:42.291 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:42.291 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:42.291 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:42.291 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:42.291 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:42.291 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:42.291 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:42.291 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:42.291 Removing: /dev/shm/bdev_svc_trace.1 00:33:42.291 Removing: /dev/shm/nvmf_trace.0 00:33:42.291 Removing: /dev/shm/spdk_tgt_trace.pid229836 00:33:42.291 Removing: /var/run/dpdk/spdk0 00:33:42.291 Removing: /var/run/dpdk/spdk1 00:33:42.291 Removing: /var/run/dpdk/spdk2 00:33:42.291 Removing: /var/run/dpdk/spdk3 00:33:42.291 Removing: /var/run/dpdk/spdk4 00:33:42.291 Removing: /var/run/dpdk/spdk_pid228293 00:33:42.291 Removing: /var/run/dpdk/spdk_pid229836 00:33:42.291 Removing: /var/run/dpdk/spdk_pid230398 00:33:42.291 Removing: /var/run/dpdk/spdk_pid231615 00:33:42.291 Removing: /var/run/dpdk/spdk_pid231760 00:33:42.291 Removing: /var/run/dpdk/spdk_pid233043 00:33:42.291 Removing: /var/run/dpdk/spdk_pid233140 00:33:42.291 Removing: /var/run/dpdk/spdk_pid233567 00:33:42.291 Removing: /var/run/dpdk/spdk_pid234461 00:33:42.291 Removing: /var/run/dpdk/spdk_pid235164 00:33:42.291 Removing: /var/run/dpdk/spdk_pid235545 00:33:42.291 Removing: /var/run/dpdk/spdk_pid235729 00:33:42.291 Removing: /var/run/dpdk/spdk_pid236034 00:33:42.291 Removing: /var/run/dpdk/spdk_pid236404 00:33:42.291 Removing: 
/var/run/dpdk/spdk_pid236835
00:33:42.291 Removing: /var/run/dpdk/spdk_pid237215
00:33:42.291 Removing: /var/run/dpdk/spdk_pid237439
00:33:42.552 Removing: /var/run/dpdk/spdk_pid238329
00:33:42.552 Removing: /var/run/dpdk/spdk_pid242106
00:33:42.552 Removing: /var/run/dpdk/spdk_pid242499
00:33:42.552 Removing: /var/run/dpdk/spdk_pid242796
00:33:42.552 Removing: /var/run/dpdk/spdk_pid243107
00:33:42.552 Removing: /var/run/dpdk/spdk_pid243482
00:33:42.552 Removing: /var/run/dpdk/spdk_pid243693
00:33:42.552 Removing: /var/run/dpdk/spdk_pid244169
00:33:42.552 Removing: /var/run/dpdk/spdk_pid244197
00:33:42.552 Removing: /var/run/dpdk/spdk_pid244556
00:33:42.552 Removing: /var/run/dpdk/spdk_pid244647
00:33:42.552 Removing: /var/run/dpdk/spdk_pid244928
00:33:42.552 Removing: /var/run/dpdk/spdk_pid245089
00:33:42.552 Removing: /var/run/dpdk/spdk_pid245661
00:33:42.552 Removing: /var/run/dpdk/spdk_pid245815
00:33:42.552 Removing: /var/run/dpdk/spdk_pid246131
00:33:42.552 Removing: /var/run/dpdk/spdk_pid246499
00:33:42.552 Removing: /var/run/dpdk/spdk_pid246524
00:33:42.552 Removing: /var/run/dpdk/spdk_pid246822
00:33:42.552 Removing: /var/run/dpdk/spdk_pid247014
00:33:42.552 Removing: /var/run/dpdk/spdk_pid247295
00:33:42.552 Removing: /var/run/dpdk/spdk_pid247647
00:33:42.552 Removing: /var/run/dpdk/spdk_pid247997
00:33:42.552 Removing: /var/run/dpdk/spdk_pid248317
00:33:42.552 Removing: /var/run/dpdk/spdk_pid248504
00:33:42.552 Removing: /var/run/dpdk/spdk_pid248735
00:33:42.552 Removing: /var/run/dpdk/spdk_pid249093
00:33:42.552 Removing: /var/run/dpdk/spdk_pid249440
00:33:42.552 Removing: /var/run/dpdk/spdk_pid249789
00:33:42.552 Removing: /var/run/dpdk/spdk_pid249998
00:33:42.552 Removing: /var/run/dpdk/spdk_pid250198
00:33:42.552 Removing: /var/run/dpdk/spdk_pid250529
00:33:42.552 Removing: /var/run/dpdk/spdk_pid250884
00:33:42.552 Removing: /var/run/dpdk/spdk_pid251234
00:33:42.552 Removing: /var/run/dpdk/spdk_pid251489
00:33:42.552 Removing: /var/run/dpdk/spdk_pid251697
00:33:42.552 Removing: /var/run/dpdk/spdk_pid251976
00:33:42.553 Removing: /var/run/dpdk/spdk_pid252334
00:33:42.553 Removing: /var/run/dpdk/spdk_pid252682
00:33:42.553 Removing: /var/run/dpdk/spdk_pid252748
00:33:42.553 Removing: /var/run/dpdk/spdk_pid253158
00:33:42.553 Removing: /var/run/dpdk/spdk_pid258273
00:33:42.553 Removing: /var/run/dpdk/spdk_pid315899
00:33:42.553 Removing: /var/run/dpdk/spdk_pid321536
00:33:42.553 Removing: /var/run/dpdk/spdk_pid333771
00:33:42.553 Removing: /var/run/dpdk/spdk_pid340760
00:33:42.553 Removing: /var/run/dpdk/spdk_pid346273
00:33:42.553 Removing: /var/run/dpdk/spdk_pid347182
00:33:42.553 Removing: /var/run/dpdk/spdk_pid355261
00:33:42.553 Removing: /var/run/dpdk/spdk_pid363072
00:33:42.553 Removing: /var/run/dpdk/spdk_pid363109
00:33:42.553 Removing: /var/run/dpdk/spdk_pid364112
00:33:42.553 Removing: /var/run/dpdk/spdk_pid365112
00:33:42.553 Removing: /var/run/dpdk/spdk_pid366126
00:33:42.553 Removing: /var/run/dpdk/spdk_pid366803
00:33:42.553 Removing: /var/run/dpdk/spdk_pid366893
00:33:42.553 Removing: /var/run/dpdk/spdk_pid367143
00:33:42.553 Removing: /var/run/dpdk/spdk_pid367369
00:33:42.553 Removing: /var/run/dpdk/spdk_pid367474
00:33:42.553 Removing: /var/run/dpdk/spdk_pid368479
00:33:42.553 Removing: /var/run/dpdk/spdk_pid369486
00:33:42.814 Removing: /var/run/dpdk/spdk_pid370492
00:33:42.814 Removing: /var/run/dpdk/spdk_pid371170
00:33:42.814 Removing: /var/run/dpdk/spdk_pid371176
00:33:42.814 Removing: /var/run/dpdk/spdk_pid371504
00:33:42.814 Removing: /var/run/dpdk/spdk_pid372935
00:33:42.814 Removing: /var/run/dpdk/spdk_pid374180
00:33:42.814 Removing: /var/run/dpdk/spdk_pid384760
00:33:42.814 Removing: /var/run/dpdk/spdk_pid385218
00:33:42.814 Removing: /var/run/dpdk/spdk_pid390793
00:33:42.814 Removing: /var/run/dpdk/spdk_pid398750
00:33:42.814 Removing: /var/run/dpdk/spdk_pid401832
00:33:42.814 Removing: /var/run/dpdk/spdk_pid415034
00:33:42.814 Removing: /var/run/dpdk/spdk_pid426767
00:33:42.814 Removing: /var/run/dpdk/spdk_pid428824
00:33:42.814 Removing: /var/run/dpdk/spdk_pid430056
00:33:42.814 Removing: /var/run/dpdk/spdk_pid451967
00:33:42.814 Removing: /var/run/dpdk/spdk_pid457664
00:33:42.814 Removing: /var/run/dpdk/spdk_pid488469
00:33:42.814 Removing: /var/run/dpdk/spdk_pid494369
00:33:42.814 Removing: /var/run/dpdk/spdk_pid496346
00:33:42.814 Removing: /var/run/dpdk/spdk_pid499108
00:33:42.814 Removing: /var/run/dpdk/spdk_pid499305
00:33:42.814 Removing: /var/run/dpdk/spdk_pid499468
00:33:42.814 Removing: /var/run/dpdk/spdk_pid499805
00:33:42.814 Removing: /var/run/dpdk/spdk_pid500484
00:33:42.814 Removing: /var/run/dpdk/spdk_pid502535
00:33:42.814 Removing: /var/run/dpdk/spdk_pid503613
00:33:42.814 Removing: /var/run/dpdk/spdk_pid504304
00:33:42.814 Removing: /var/run/dpdk/spdk_pid506704
00:33:42.814 Removing: /var/run/dpdk/spdk_pid507409
00:33:42.814 Removing: /var/run/dpdk/spdk_pid508111
00:33:42.814 Removing: /var/run/dpdk/spdk_pid513566
00:33:42.814 Removing: /var/run/dpdk/spdk_pid526797
00:33:42.814 Removing: /var/run/dpdk/spdk_pid531556
00:33:42.814 Removing: /var/run/dpdk/spdk_pid539213
00:33:42.814 Removing: /var/run/dpdk/spdk_pid540729
00:33:42.814 Removing: /var/run/dpdk/spdk_pid542508
00:33:42.814 Removing: /var/run/dpdk/spdk_pid548825
00:33:42.814 Removing: /var/run/dpdk/spdk_pid554234
00:33:42.814 Removing: /var/run/dpdk/spdk_pid564331
00:33:42.814 Removing: /var/run/dpdk/spdk_pid564336
00:33:42.814 Removing: /var/run/dpdk/spdk_pid569949
00:33:42.814 Removing: /var/run/dpdk/spdk_pid570073
00:33:42.814 Removing: /var/run/dpdk/spdk_pid570400
00:33:42.814 Removing: /var/run/dpdk/spdk_pid570823
00:33:42.814 Removing: /var/run/dpdk/spdk_pid570926
00:33:42.814 Removing: /var/run/dpdk/spdk_pid576800
00:33:42.814 Removing: /var/run/dpdk/spdk_pid577622
00:33:42.814 Removing: /var/run/dpdk/spdk_pid583456
00:33:42.814 Removing: /var/run/dpdk/spdk_pid586497
00:33:42.814 Removing: /var/run/dpdk/spdk_pid593549
00:33:42.814 Removing: /var/run/dpdk/spdk_pid600636
00:33:42.814 Removing: /var/run/dpdk/spdk_pid611571
00:33:42.814 Removing: /var/run/dpdk/spdk_pid620728
00:33:42.814 Removing: /var/run/dpdk/spdk_pid620763
00:33:42.814 Removing: /var/run/dpdk/spdk_pid644693
00:33:42.814 Removing: /var/run/dpdk/spdk_pid645381
00:33:42.814 Removing: /var/run/dpdk/spdk_pid646144
00:33:42.814 Removing: /var/run/dpdk/spdk_pid646976
00:33:42.814 Removing: /var/run/dpdk/spdk_pid647870
00:33:43.076 Removing: /var/run/dpdk/spdk_pid648663
00:33:43.076 Removing: /var/run/dpdk/spdk_pid649468
00:33:43.076 Removing: /var/run/dpdk/spdk_pid650182
00:33:43.076 Removing: /var/run/dpdk/spdk_pid655692
00:33:43.076 Removing: /var/run/dpdk/spdk_pid655936
00:33:43.076 Removing: /var/run/dpdk/spdk_pid664191
00:33:43.076 Removing: /var/run/dpdk/spdk_pid664571
00:33:43.076 Removing: /var/run/dpdk/spdk_pid667078
00:33:43.076 Removing: /var/run/dpdk/spdk_pid674855
00:33:43.076 Removing: /var/run/dpdk/spdk_pid674861
00:33:43.076 Removing: /var/run/dpdk/spdk_pid681527
00:33:43.076 Removing: /var/run/dpdk/spdk_pid683726
00:33:43.076 Removing: /var/run/dpdk/spdk_pid686239
00:33:43.076 Removing: /var/run/dpdk/spdk_pid687561
00:33:43.076 Removing: /var/run/dpdk/spdk_pid689947
00:33:43.076 Removing: /var/run/dpdk/spdk_pid691470
00:33:43.076 Removing: /var/run/dpdk/spdk_pid702110
00:33:43.076 Removing: /var/run/dpdk/spdk_pid702670
00:33:43.076 Removing: /var/run/dpdk/spdk_pid703336
00:33:43.076 Removing: /var/run/dpdk/spdk_pid706389
00:33:43.076 Removing: /var/run/dpdk/spdk_pid707072
00:33:43.076 Removing: /var/run/dpdk/spdk_pid707555
00:33:43.076 Removing: /var/run/dpdk/spdk_pid712934
00:33:43.076 Removing: /var/run/dpdk/spdk_pid713091
00:33:43.076 Removing: /var/run/dpdk/spdk_pid714713
00:33:43.076 Removing: /var/run/dpdk/spdk_pid715319
00:33:43.076 Removing: /var/run/dpdk/spdk_pid715336
00:33:43.076 Clean
00:33:43.076 00:09:58 -- common/autotest_common.sh@1445 -- # return 0
00:33:43.076 00:09:58 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:33:43.076 00:09:58 -- common/autotest_common.sh@722 -- # xtrace_disable
00:33:43.076 00:09:58 -- common/autotest_common.sh@10 -- # set +x
00:33:43.076 00:09:58 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:33:43.076 00:09:58 -- common/autotest_common.sh@722 -- # xtrace_disable
00:33:43.076 00:09:58 -- common/autotest_common.sh@10 -- # set +x
00:33:43.338 00:09:58 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:43.338 00:09:58 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:33:43.338 00:09:58 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:33:43.338 00:09:58 -- spdk/autotest.sh@391 -- # hash lcov
00:33:43.338 00:09:58 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:33:43.338 00:09:58 -- spdk/autotest.sh@393 -- # hostname
00:33:43.338 00:09:58 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:33:43.338 geninfo: WARNING: invalid characters removed from testname!
00:34:09.924 00:10:22 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:10.495 00:10:25 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:12.409 00:10:27 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:13.795 00:10:28 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:15.181 00:10:30 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:17.111 00:10:31 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:18.542 00:10:33 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:18.542 00:10:33 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:34:18.542 00:10:33 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:34:18.542 00:10:33 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:18.542 00:10:33 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:18.542 00:10:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:18.542 00:10:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:18.542 00:10:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:18.542 00:10:33 -- paths/export.sh@5 -- $ export PATH
00:34:18.542 00:10:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:18.542 00:10:33 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:34:18.542 00:10:33 -- common/autobuild_common.sh@444 -- $ date +%s
00:34:18.542 00:10:33 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721081433.XXXXXX
00:34:18.542 00:10:33 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721081433.M0fkxF
00:34:18.542 00:10:33 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:34:18.542 00:10:33 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:34:18.542 00:10:33 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:34:18.542 00:10:33 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:34:18.542 00:10:33 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:34:18.542 00:10:33 -- common/autobuild_common.sh@460 -- $ get_config_params
00:34:18.542 00:10:33 -- common/autotest_common.sh@390 -- $ xtrace_disable
00:34:18.542 00:10:33 -- common/autotest_common.sh@10 -- $ set +x
00:34:18.542 00:10:33 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:34:18.542 00:10:33 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:34:18.542 00:10:33 -- pm/common@17 -- $ local monitor
00:34:18.542 00:10:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:18.542 00:10:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:18.542 00:10:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:18.542 00:10:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:18.542 00:10:33 -- pm/common@21 -- $ date +%s
00:34:18.542 00:10:33 -- pm/common@21 -- $ date +%s
00:34:18.542 00:10:33 -- pm/common@25 -- $ sleep 1
00:34:18.542 00:10:33 -- pm/common@21 -- $ date +%s
00:34:18.542 00:10:33 -- pm/common@21 -- $ date +%s
00:34:18.542 00:10:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721081433
00:34:18.542 00:10:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721081433
00:34:18.542 00:10:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721081433
00:34:18.542 00:10:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721081433
00:34:18.542 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721081433_collect-vmstat.pm.log
00:34:18.542 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721081433_collect-cpu-load.pm.log
00:34:18.542 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721081433_collect-cpu-temp.pm.log
00:34:18.542 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721081433_collect-bmc-pm.bmc.pm.log
00:34:19.484 00:10:34 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:34:19.484 00:10:34 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144
00:34:19.484 00:10:34 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:19.484 00:10:34 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:34:19.484 00:10:34 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:34:19.484 00:10:34 -- spdk/autopackage.sh@19 -- $ timing_finish
00:34:19.484 00:10:34 -- common/autotest_common.sh@728 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:19.484 00:10:34 -- common/autotest_common.sh@729 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:34:19.484 00:10:34 -- common/autotest_common.sh@731 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:34:19.484 00:10:34 -- spdk/autopackage.sh@20 -- $ exit 0
00:34:19.484 00:10:34 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:34:19.484 00:10:34 -- pm/common@29 -- $ signal_monitor_resources TERM
00:34:19.484 00:10:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:34:19.484 00:10:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:19.484 00:10:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:34:19.484 00:10:34 -- pm/common@44 -- $ pid=730097
00:34:19.484 00:10:34 -- pm/common@50 -- $ kill -TERM 730097
00:34:19.484 00:10:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:19.484 00:10:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:34:19.484 00:10:34 -- pm/common@44 -- $ pid=730098
00:34:19.484 00:10:34 -- pm/common@50 -- $ kill -TERM 730098
00:34:19.484 00:10:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:19.484 00:10:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:34:19.484 00:10:34 -- pm/common@44 -- $ pid=730101
00:34:19.484 00:10:34 -- pm/common@50 -- $ kill -TERM 730101
00:34:19.484 00:10:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:19.484 00:10:34 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:34:19.484 00:10:34 -- pm/common@44 -- $ pid=730125
00:34:19.484 00:10:34 -- pm/common@50 -- $ sudo -E kill -TERM 730125
00:34:19.744 + [[ -n 105781 ]]
00:34:19.744 + sudo kill 105781
00:34:19.755 [Pipeline] }
00:34:19.770 [Pipeline] // stage
00:34:19.775 [Pipeline] }
00:34:19.786 [Pipeline] // timeout
00:34:19.789 [Pipeline] }
00:34:19.800 [Pipeline] // catchError
00:34:19.803 [Pipeline] }
00:34:19.813 [Pipeline] // wrap
00:34:19.817 [Pipeline] }
00:34:19.832 [Pipeline] // catchError
00:34:19.840 [Pipeline] stage
00:34:19.842 [Pipeline] { (Epilogue)
00:34:19.854 [Pipeline] catchError
00:34:19.855 [Pipeline] {
00:34:19.869 [Pipeline] echo
00:34:19.871 Cleanup processes
00:34:19.877 [Pipeline] sh
00:34:20.164 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:20.164 730204 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:34:20.164 730653 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:20.178 [Pipeline] sh
00:34:20.465 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:20.465 ++ grep -v 'sudo pgrep'
00:34:20.465 ++ awk '{print $1}'
00:34:20.465 + sudo kill -9 730204
00:34:20.478 [Pipeline] sh
00:34:20.767 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:33.013 [Pipeline] sh
00:34:33.301 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:33.301 Artifacts sizes are good
00:34:33.317 [Pipeline] archiveArtifacts
00:34:33.328 Archiving artifacts
00:34:33.572 [Pipeline] sh
00:34:33.913 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:34:33.931 [Pipeline] cleanWs
00:34:33.942 [WS-CLEANUP] Deleting project workspace...
00:34:33.942 [WS-CLEANUP] Deferred wipeout is used...
00:34:33.950 [WS-CLEANUP] done
00:34:33.952 [Pipeline] }
00:34:33.972 [Pipeline] // catchError
00:34:33.985 [Pipeline] sh
00:34:34.273 + logger -p user.info -t JENKINS-CI
00:34:34.283 [Pipeline] }
00:34:34.296 [Pipeline] // stage
00:34:34.300 [Pipeline] }
00:34:34.314 [Pipeline] // node
00:34:34.320 [Pipeline] End of Pipeline
00:34:34.353 Finished: SUCCESS